- Nov 15, 2018
-
-
Matt Harbison authored
The previous message was too debug-ish and less action-oriented than a hint should be. The remaining errors that aren't handled are more along the lines of programming errors (not using POST, bad accept type, etc.), so I'm not bothering with those. The friendly errors purposely use `self.baseurl` instead of the full Batch API endpoint because I'd expect some copy/paste/modify on the part of the user here, and it would be more confusing if '/objects/batch' magically appeared but shouldn't be used in the config setting. The full endpoint still seems like the right thing to print for debugging in the catch-all case.
-
Matt Harbison authored
A coworker had a typo in `lfs.url`, forgot it was even set because usually the blob server is inferred, and then got a 404. It would have been easier to debug with the failing URL printed.
-
- Oct 22, 2018
-
-
Matt Harbison authored
I added a testcase for lfs to all narrow tests, and the following failed:

  test-narrow-acl.t
  test-narrow-exchange.t
  test-narrow-patterns.t
  test-narrow-strip.t
  test-narrow-trackedcmd.t
  test-narrow-widen.t
  test-narrow.t

The first two still have errors in the pretxnchangegroup on clone and (receiving a) push, which I'm still looking into (4d63f3bc1e1a fixed something in this area already). These two modified tests seem to cover the things that failed in the remaining narrow tests, i.e. `hg tracked` and `hg strip`, so I didn't bother enabling the testcases elsewhere. Maybe we should, but it's 68 tests total.
-
- Oct 19, 2018
-
-
Matt Harbison authored
This is in the spirit of bcf72d7b1524.
-
- Sep 21, 2018
-
-
Matt Harbison authored
This is based on a patch by Gregory Szorc. I made small adjustments to clean up the messaging when the server has the extension enabled but the client has it disabled (to prevent autoloading). Additionally, I added a second server capability to distinguish between the server having the extension enabled and the server having LFS commits. This helps prevent unnecessary requirement propagation: the client shouldn't add a requirement that the server doesn't have just because the server had the extension loaded. The TODO I had about advertising a capability when the server can natively serve up blobs isn't relevant anymore (we've had 2 releases that support this), so I dropped it.

Currently, we lazily add the "lfs" requirement to a repo when we first encounter LFS data. Due to a pretxnchangegroup hook that looks for LFS data, this can happen at the end of clone. Now that we have more control over how repositories are created, we can do better.

This commit adds a repo creation option to add the "lfs" requirement. hg.clone() sets this creation option if the remote peer is advertising lfs usage (as opposed to just the support needed to push). So, what this change effectively does is have cloned repos automatically inherit the "lfs" requirement.

Differential Revision: https://phab.mercurial-scm.org/D5130
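As a hedged illustration of the mechanism, here is a standalone sketch; `creation_options`, the capability name 'lfs-usage', and the requirement set are stand-ins, not the real hg.clone()/localrepo API:

  # Illustrative sketch only: derive repo creation options from the peer's
  # advertised capabilities, and turn them into initial requirements.

  def creation_options(remote_caps):
      createopts = {}
      # Only inherit the requirement when the server says it actually uses
      # LFS (has LFS commits), not merely that the extension is loaded there.
      if 'lfs-usage' in remote_caps:
          createopts['lfs'] = True
      return createopts

  def requirements_for_new_repo(createopts):
      requirements = {'revlogv1', 'store'}
      if createopts.get('lfs'):
          requirements.add('lfs')
      return requirements

  if __name__ == '__main__':
      caps = {'branchmap', 'lookup', 'lfs-usage'}
      print(requirements_for_new_repo(creation_options(caps)))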
-
- Oct 16, 2018
-
-
Augie Fackler authored
-
- Oct 04, 2018
-
-
Matt Harbison authored
Previously, enabling the extension for any repo in commandserver or hgweb would enable the flags on all repos. Since localrepo.resolverevlogstorevfsoptions() is called so early, the check to see if the extension is enabled on the repo (which hasn't been instantiated yet) is a bit awkward. But I don't see a better way.
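A rough standalone sketch of that kind of early, pre-instantiation check; reading .hg/hgrc with configparser is a simplification here (real hgrc parsing goes through the ui/config machinery), and the function name is made up:

  import configparser
  import os

  def lfs_enabled_for(repo_path, enabled_globally):
      """Decide whether LFS revlog flags apply to one repo before the repo
      object exists, by peeking at its .hg/hgrc directly."""
      cfg = configparser.ConfigParser()
      cfg.read(os.path.join(repo_path, '.hg', 'hgrc'))
      local = cfg.get('extensions', 'lfs', fallback=None)
      if local is not None:
          # An entry like "lfs = !" disables the extension for this repo only.
          return not local.strip().startswith('!')
      return enabled_globally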
-
- Oct 10, 2018
-
-
Matt Harbison authored
A coworker hit this once yesterday when pulling in thg (a retry worked), and then I hit it with strip after a pull. I had a difficult time recreating a test for this (at least one of the tricks was to not use '-R', which seems to cause reposetup() to be called for each command), so I'm not sure how large the window for this actually is. Calling reposetup() *after* the requirement is added will skip the hook entirely.

The other issue I had was adding a couple of `ui.status()` lines around the check that installs the hook. On Windows, the cmdserver process ballooned to 1.6GB and hung. Changing that to `ui.warn()` avoided the hang. It also hung on macOS, but without the large memory usage.
-
- Oct 02, 2018
-
-
Matt Harbison authored
This is needed to keep py3 happy. The other two instances of sorting already did this.
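For context, this is the generic shape of the fix; the actual call site is inside the lfs extension, and the objects below are made up:

  # Python 3 refuses to order heterogeneous or unorderable values (e.g.
  # dicts), so the sort needs an explicit key function.
  objects = [
      {'oid': 'b5bb9d8014a0', 'size': 12},
      {'oid': '7d865e959b24', 'size': 3},
  ]
  for obj in sorted(objects, key=lambda o: o['oid']):
      print(obj['oid'], obj['size'])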
-
- Sep 24, 2018
-
-
Gregory Szorc authored
Parsing and writing of revision text metadata is likely identical across storage backends. Let's move the code out of revlog so we don't need to import the revlog module in order to use it. Differential Revision: https://phab.mercurial-scm.org/D4754
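A simplified sketch of the metadata framing being moved; the function names mirror the real helpers, but this is not the real module and byte-level handling is trimmed for illustration:

  _META = b'\x01\n'

  def packmeta(meta, text):
      # Prepend a metadata block of 'key: value' lines framed by \x01\n.
      lines = [b'%s: %s\n' % (k, v) for k, v in sorted(meta.items())]
      return _META + b''.join(lines) + _META + text

  def parsemeta(text):
      # Return (metadata dict or None, offset where the real text starts).
      if not text.startswith(_META):
          return None, 0
      end = text.index(_META, 2)
      meta = dict(line.split(b': ', 1) for line in text[2:end].splitlines())
      return meta, end + 2

  if __name__ == '__main__':
      blob = packmeta({b'copy': b'a.txt', b'copyrev': b'0' * 40}, b'body\n')
      print(parsemeta(blob))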
-
- Sep 26, 2018
-
-
Gregory Szorc authored
A recent change dropped the last user of this module. Differential Revision: https://phab.mercurial-scm.org/D4744
-
- Sep 21, 2018
-
-
Gregory Szorc authored
This attribute is only used by LFS. It is used by one of the revlog flag processor functions, which gets an instance of the revlog - not the file storage type. So, it makes sense to store this attribute on the revlog instead of the filelog. With this change, I'm pretty confident that LFS is no longer directly accessing file storage interface members that are revlog centric. i.e. it gets us one step closer to eliminating revlog-centric APIs from the file storage interface! Differential Revision: https://phab.mercurial-scm.org/D4715
-
Gregory Szorc authored
LFS is monkeypatching filelog.filelog and is then accessing various filelog attributes in the monkeypatched function. This is all fine. But some of the attributes being accessed by LFS are revlog centric and shouldn't be exposed on the file storage interface. This commit changes the monkeypatched functions to access proxied attributes on self._revlog instead of self. This should be safe to do because non-revlog repositories should not be using filelog instances: instead they should have a separate class to represent file storage. So it is reasonable for LFS to assume the _revlog attribute exists and points to a revlog. Differential Revision: https://phab.mercurial-scm.org/D4714
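A generic, self-contained illustration of the pattern (FakeFilelog/FakeRevlog are stand-ins for the real classes): the wrapped method reaches revlog-specific state through self._revlog rather than assuming it lives on the storage object itself.

  class FakeRevlog(object):
      def __init__(self):
          self.opener = object()        # revlog-centric attribute

  class FakeFilelog(object):
      def __init__(self):
          self._revlog = FakeRevlog()   # file storage proxies a revlog

      def addrevision(self, text):
          return len(text)

  def wrapped_addrevision(orig, self, text):
      # Access revlog internals via the proxy, not via `self`.
      assert self._revlog.opener is not None
      return orig(self, text)

  # Monkeypatch in the spirit of extensions.wrapfunction().
  _orig = FakeFilelog.addrevision
  FakeFilelog.addrevision = (
      lambda self, text: wrapped_addrevision(_orig, self, text))

  print(FakeFilelog().addrevision(b'data'))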
-
- Sep 20, 2018
-
-
Gregory Szorc authored
Now that repository loading in core supports automatically loading the lfs extension when the "lfs" requirement is present, we no longer need to update the .hg/hgrc of newly-created repos to load the lfs extension! I'm marking this as BC because it is a change in behavior. But users should not notice unless they create an LFS repo with new Mercurial and then attempt to use it with an old version that doesn't support automatic extension loading. Differential Revision: https://phab.mercurial-scm.org/D4712
-
- Sep 19, 2018
-
-
Gregory Szorc authored
Whether LFS is enabled seems like a useful feature to expose. This will also facilitate some future work around LFS feature compatibility. Differential Revision: https://phab.mercurial-scm.org/D4710
-
- Sep 20, 2018
-
-
Gregory Szorc authored
Now that we can create a shared repository via creation options, we can handle other special actions related to share at repo creation time as well.

One of the things we do after creating a shared repository is write out a .hg/shared file containing the list of additional things to share, of which only "bookmarks" is supported. We add a creation option to hold the set of additional items to share. If items are defined, we write out the .hg/shared file at repo creation time.

As part of this, we no longer hold the repo lock when writing the file. I'm pretty sure we don't care about the tiny race condition window. I'm also pretty sure the reason we used the lock was because the vfs auditor on the repo instance complained otherwise. Since the repo creation code doesn't have an audited vfs, we don't need to appease it.

Because we no longer need to tell the post-share hook what items are shared, the "bookmarks" argument to that function has been dropped, incurring an API change.

Differential Revision: https://phab.mercurial-scm.org/D4708
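A minimal sketch of the creation-time write, assuming a plain filesystem path for .hg and the single supported item; the function name is made up:

  import os

  def write_shared_file(hgdir, shareditems):
      # Write .hg/shared listing the extra items to share; no repo lock is
      # needed because this happens while the repository is being created.
      if not shareditems:
          return
      with open(os.path.join(hgdir, 'shared'), 'wb') as fp:
          for item in sorted(shareditems):
              fp.write(item.encode('ascii') + b'\n')

  if __name__ == '__main__':
      os.makedirs('demo-share/.hg', exist_ok=True)
      write_shared_file('demo-share/.hg', {'bookmarks'})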
-
- Sep 18, 2018
-
-
Gregory Szorc authored
When catching errors in storage, we should be catching StorageError instead of RevlogError. When throwing errors related to storage, we shouldn't be using RevlogError unless we know the error stemmed from revlogs. And we only reliably know that if we're in revlog.py or are inheriting from a type defined in revlog.py. Differential Revision: https://phab.mercurial-scm.org/D4655
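The shape of the hierarchy and the catching rule, as a simplified example; the class names mirror those in mercurial.error, but this is not that module:

  class StorageError(Exception):
      """Problem reading or writing file storage, backend-agnostic."""

  class RevlogError(StorageError):
      """Storage error known to originate from a revlog."""

  def read_from_storage(store):
      try:
          return store.read()
      except StorageError as err:   # catch the generic error, not RevlogError
          raise RuntimeError('abort: %s' % err)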
-
- Sep 06, 2018
-
-
Matt Harbison authored
I noticed a "missing" blob when pushing two repositories with common blobs to a fresh server, and then running `hg verify` as a user different from the one running the web server. When pushing the second repo, several of the blobs already existed in the user cache, so the server indicated to the client that it doesn't need to upload the blobs. That's good enough for the web server process to serve up in the future. But a different user has a different cache by default, so verify complains that `lfs.url` needs to be set, because it wants to fetch the missing blobs. Aside from that corner case, it's better to keep all of the blobs in the repo whenever possible. Especially since the largefiles wiki says the user cache can be deleted at any time to reclaim disk space- users switching over may have the same expectations.
-
- Aug 25, 2018
-
-
Matt Harbison authored
-
- Aug 24, 2018
-
-
Matt Harbison authored
The search itself can take an extreme amount of time if there are a lot of revisions involved. I've got a local repo that took 6 minutes to push 1850 commits, and 60% of that time was spent here (there are ~70K files):

  \ 58.1%  wrapper.py:    extractpointers  line 297:  pointers = extractpointers(...
    | 57.7%  wrapper.py:    pointersfromctx  line 352:  for p in pointersfromctx(ct...
    | 57.4%  wrapper.py:    pointerfromctx   line 397:  p = pointerfromctx(ctx, f, ...
      \ 38.7%  context.py:    __contains__   line 368:  if f not in ctx:
        | 38.7%  util.py:       __get__        line 82:   return key in self._manifest
        | 38.7%  context.py:    _manifest      line 1416: result = self.func(obj)
        | 38.7%  manifest.py:   read           line 472:  return self._manifestctx.re...
          \ 25.6%  revlog.py:     revision     line 1562: text = rl.revision(self._node)
            \ 12.8%  revlog.py:     _chunks    line 2217: bins = self._chunks(chain, ...
              | 12.0%  revlog.py:     decompress line 2112: ladd(decomp(buffer(data, ch...
            \ 7.8%   revlog.py:     checkhash  line 2232: self.checkhash(text, node, ...
              | 7.8%   revlog.py:     hash       line 2315: if node != self.hash(text, ...
              | 7.8%   revlog.py:     hash       line 2242: return hash(text, p1, p2)
          \ 12.0%  manifest.py:   __init__     line 1565: self._data = manifestdict(t...
      \ 16.8%  context.py:    filenode       line 378:  if not _islfs(fctx.filelog(...
        | 15.7%  util.py:       __get__        line 706:  return self._filelog
        | 14.8%  context.py:    _filelog       line 1416: result = self.func(obj)
        | 14.8%  localrepo.py:  file           line 629:  return self._repo.file(self...
        | 14.8%  filelog.py:    __init__       line 1134: return filelog.filelog(self...
        | 14.5%  revlog.py:     __init__       line 24:   censorable=True)
-
- Jul 22, 2018
-
-
Yuya Nishihara authored
I'll add a couple more functions that work on the parsed tree.

  % wc -l mercurial/fileset*.py
     559 mercurial/fileset.py
     135 mercurial/filesetlang.py
     694 total
-
Yuya Nishihara authored
It was added at 91aac8e6604d, but is no longer needed since a fileset expression is now compiled into an "open" matcher. See ff5b6fca1082 for details.
-
- Jun 09, 2018
-
-
Yuya Nishihara authored
This makes the fileset expression open to any input, so that we can just say "hg status 'set: not binary()'" to select text files, including unknown ones. With this and the removal of subset computation, 'set:**' becomes as fast as 'glob:**'. Further optimization will probably be possible, for example by narrowing the file tree over which status is computed. This also fixes 'subrepo()' to not ignore the current mctx.subset.

.. bc::

   The fileset expression may include untracked files by default. Use
   ``tracked()`` to explicitly filter out files not existing at the context
   revision.
-
- Jun 26, 2018
-
-
Augie Fackler authored
This has consistent behavior on Python 2.7, 3.6, and 3.7 and has the benefit of probably being a little faster. Test output changes are largely because / used to be pointlessly escaped. Differential Revision: https://phab.mercurial-scm.org/D3842
-
- Jun 18, 2018
-
-
Martin von Zweigbergk authored
Differential Revision: https://phab.mercurial-scm.org/D3806
-
- May 31, 2018
-
-
Matt Harbison authored
There are only a handful of methods that access repo attributes that are applied in reposetup(). The `diff` test covers all of the commands that call scmutil.prefetchfiles(). Along the way, I saw that adding files and upgrading the repo format were also problems (also tested here). I don't think running `hg serve` through the commandserver is sane, but I conditionalized both the capabilities and the wsgirequest handler because it's trivially correct.

It doesn't look like there has ever been a caller of candownload(), so there's no test for that path. The upload case isn't testable, because uploadblobs() bails if there are no pointers. The requirement should be added any time pointers are introduced, and that would force the extension to be loaded specifically for the repo. This covers `debuglfsupload`, the pre-push hook (which isn't set until the repo is promoted to LFS), and uploadblobsfromrevs(), which can be called by other extensions.

I think readfromstore() and writetostore() are only reachable as a flag processor for revlog.REVIDX_EXTSTORED, and a requirement is added as soon as that is seen, so I don't think those are a problem.
-
- Apr 27, 2018
-
-
Matt Harbison authored
It wasn't obvious that LFS was involved from the error messages when `hg verify` fails.
-
- May 10, 2018
-
-
Yuya Nishihara authored
If we use pprint() as a drop-in replacement for repr(), bprefix=False is more appropriate. Let's make it the default to remove bprefix=False noise.
-
- Apr 27, 2018
-
-
Augie Fackler authored
Differential Revision: https://phab.mercurial-scm.org/D3513
-
- Apr 17, 2018
-
-
Gregory Szorc authored
We have wireprotov2server, wireprotov1peer, and wireprotov2peer. wireproto only contains server functionality. So it makes sense to rename it to wireprotov1server so the naming aligns with everything else. Differential Revision: https://phab.mercurial-scm.org/D3400
-
- Apr 06, 2018
-
-
Matt Harbison authored
The client copies all of these properties under 'header' to the HTTP headers of the subsequent GET or PUT request that it performs. That allows the Basic HTTP authentication used to authorize the Batch API request to also authorize the upload/download action.

There's likely further work to do here. There's an 'authenticated' boolean key in the Batch API response that can be set, and there is an 'LFS-Authenticate' header that is used instead of 'WWW-Authenticate'[1]. (We likely need to support both, since some hosting solutions are likely to only respond with the latter.) In any event, this works with SCM Manager, so there is real world benefit.

I'm limiting the headers returned to 'Basic', because that's all the lfs spec calls out. In practice, I've seen gitbucket emit custom header content[2].

[1] https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md#response-errors
[2] https://github.com/gitbucket/gitbucket/blob/35655f33c7713f08515ed640ece0948acd6d6168/src/main/scala/gitbucket/core/servlet/GitRepositoryServlet.scala#L119
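A sketch of the server side of this, with illustrative values; the object layout follows the git-lfs Batch API, but the helper name and URLs are made up:

  import json

  def batch_download_object(oid, size, href, auth_header):
      action = {'href': href}
      if auth_header and auth_header.startswith('Basic '):
          # The client copies these onto its follow-up GET/PUT, so the same
          # credentials authorize the actual transfer.
          action['header'] = {'Authorization': auth_header}
      return {'oid': oid, 'size': size, 'actions': {'download': action}}

  print(json.dumps(batch_download_object(
      'b5bb9d8014a0', 12,
      'https://example.com/repo/.hg/lfs/objects/b5bb9d8014a0',
      'Basic dXNlcjpwYXNz'), indent=2))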
-
- Apr 15, 2018
-
-
Matt Harbison authored
At this point, I think all of the core commands are prefetching, except grep and verify. Verify will need some special handling, in case the revlogs are corrupt. Grep has an issue that still needs to be debugged, but we probably need to give the behavior some thought too- it would be a shame to have to download everything in order to search. I think the benefit of having this info for all commands outweighs extra printing in a command that is arguably not well behaved in this context anyway.
-
- Apr 14, 2018
-
-
Matt Harbison authored
The remainder of the commands that need prefetch deal with multiple revisions. I initially coded this as a separate hook, but then it needed a list of files to handle `diff` and `grep`, so it didn't seem worth keeping them separate. Not every matcher will emit bad file messages (some are built from a list of files that are known to exist). But it seems better to filter this in one place than to push this on either each caller or each hook implementation.
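A sketch of the single-hook shape described here; names like _fileprefetchhooks and the demo matcher are assumptions, not the real scmutil API. Callers pass revisions plus a file matcher, and bad-file complaints are silenced in one place.

  _fileprefetchhooks = []

  def prefetchfiles(repo, revs, match):
      # Matchers built from user-supplied patterns complain about missing
      # files; hooks only care about files present in the given revisions,
      # so silence those messages centrally.
      match.bad = lambda f, msg: None
      for hook in _fileprefetchhooks:
          hook(repo, revs, match)

  class _demomatch(object):
      def __call__(self, f):
          return True
      def bad(self, f, msg):
          print('bad: %s (%s)' % (f, msg))

  _fileprefetchhooks.append(
      lambda repo, revs, match: print('prefetching for revs', revs))
  prefetchfiles(None, [0, 1], _demomatch())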
-
- Apr 13, 2018
-
-
Matt Harbison authored
I'm not bothering with validating PUT requests (for now), because the spec doesn't explicitly call out a Content-Type (though the example transcript does use the sensible 'application/octet-stream').
-
- Feb 25, 2018
-
-
Matt Harbison authored
The aborts weren't killing the server, but this seems cleaner. I'm not sure if it matters to handle the remaining IOError in the test like this, for consistency. The error code still feels wrong (especially if the client is trying to download a corrupt blob) but I don't see anything better in the RFCs, and this is already used elsewhere because the Batch API spec specifically mentioned this as a "Validation Error".
-
- Apr 13, 2018
-
-
Matt Harbison authored
This wasn't appending the '.git/info/lfs' path in this case.
-
Matt Harbison authored
Reporting a 500 and then not leaving any traces on the server seems like a recipe for frustration. There's similar log writing in hgweb.server.do_POST(). That doesn't write directly to the wsgi.errors object, so it doesn't seem worth trying to refactor. It does seem like earlier stack frames are missing for some reason.
-
Matt Harbison authored
-
- Apr 11, 2018
-
-
Matt Harbison authored
While here, I also checked the lfs.url config directly instead of testing the scheme, as requested by Yuya.
-
- Apr 08, 2018
-
-
Matt Harbison authored
Unlike pull, the blobs are uploaded within the exchange.push() window, so simply wrap it and swap in a properly configured remote store. The '_subtoppath' field shouldn't be available during this window, but give the passed path priority for clarity.

At one point I hit an AttributeError in one of the convert tests when trying to save the original remote blobstore when the swap was run unconditionally. I wrapped it in a util.safehasattr(), but then today I wasn't able to reproduce it. But now the whole thing is tucked under the requirement guard, because without the requirement there are no blobs in the repo, even if the extension is loaded.
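A standalone sketch of the swap, using a dict as a stand-in repo and made-up attribute names; the real wrapper works on the repo object and the exchange.push() call:

  import contextlib

  def remotestore_for(url):
      # Stand-in for building a blob store client from a URL.
      return {'url': url}

  @contextlib.contextmanager
  def swapped_remote_store(repo, dest_url):
      if 'lfs' not in repo.get('requirements', set()):
          # No requirement means no blobs in the repo; leave things alone.
          yield
          return
      orig = repo.get('lfsremote')
      repo['lfsremote'] = remotestore_for(dest_url)
      try:
          yield
      finally:
          repo['lfsremote'] = orig

  repo = {'requirements': {'lfs'}, 'lfsremote': remotestore_for('https://cfg/')}
  with swapped_remote_store(repo, 'https://push-target/'):
      print(repo['lfsremote'])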
-