- Jul 05, 2017
-
-
Augie Fackler authored
-
- Jun 23, 2017
-
-
Pierre-Yves David authored
The general delta heuristic for selecting a delta base does not scale with the number of branches. The delta base is frequently too far away for an existing chain to be reused according to the "distance" criteria. This leads to the insertion of larger deltas (or even full texts) that themselves push the bases for the next deltas further away, producing yet more large deltas and full texts. These full texts and the frequent recomputation throw Mercurial performance into disarray.

For example, on a slightly large repository:

    280 000 files (2 150 000 versions)
    430 000 changesets (10 000 topological heads)

the numbers below compare the repository with and without the distance criteria:

    manifest size:    with: 21.4 GB    without: 0.3 GB
    store size:       with: 28.7 GB    without: 7.4 GB
    bundle of the last 15 000 revisions:
                      with: 800 seconds, 971 MB    without: 50 seconds, 73 MB
    unbundle time (of the last 15 000 revisions):
                      with: 1150 seconds (~19 minutes)    without: 35 seconds

Similar issues have been observed in other repositories.

Adding a new option or "feature" on stable is uncommon. However, given that this issue makes Mercurial practically unusable, I'm exceptionally targeting this patch for stable.

What is actually needed is a full rework of the delta building and reading logic. However, that will be a longer process and churn not suitable for stable. In the meantime, we introduce a quick and dirty mitigation in the 'experimental' config space. The new option provides a way to set the maximum amount of memory usable to store a diff in memory. This extends Mercurial's ability to create chains without removing all safeguards regarding memory access. The option should be phased out when core has a more proper solution available. Setting the limit to '0' removes all limits; setting it to '-1' uses the default limit (textsize x 4).
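As a rough, minimal illustration of the "good delta" decision sketched above (a sketch only: the function and parameter names below, including maxdiffmem, are placeholders rather than Mercurial's internal API, and this message does not spell out the actual config key):

    # Illustrative sketch; all names are placeholders, not Mercurial internals.
    def is_good_delta(distance, deltasize, textlen, maxdiffmem=-1):
        """Decide whether a candidate delta may be stored.

        distance   -- bytes of chain data between the base and the new revision
        deltasize  -- size of the candidate delta
        textlen    -- size of the full text of the new revision
        maxdiffmem -- 0 means no limit, -1 means the default (textlen * 4)
        """
        if maxdiffmem == -1:
            maxdiffmem = textlen * 4      # the default limit described above
        if maxdiffmem and distance > maxdiffmem:
            return False                  # base too far away: reject the chain
        # a delta bigger than half the full text is not worth keeping either
        return deltasize < textlen // 2

Raising the limit (or setting it to 0) lets longer chains survive, at the cost of reading more delta data when reconstructing a revision.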
-
- Jul 04, 2017
-
-
Augie Fackler authored
We'll move this to the default branch.
-
- Jun 23, 2017
-
-
Pierre-Yves David authored
The general delta heuristic for selecting a delta base does not scale with the number of branches. The delta base is frequently too far away for an existing chain to be reused according to the "distance" criteria. This leads to the insertion of larger deltas (or even full texts) that themselves push the bases for the next deltas further away, producing yet more large deltas and full texts. These full texts and the frequent recomputation throw Mercurial performance into disarray.

For example, on a slightly large repository:

    280 000 files (2 150 000 versions)
    430 000 changesets (10 000 topological heads)

the numbers below compare the repository with and without the distance criteria:

    manifest size:    with: 21.4 GB    without: 0.3 GB
    store size:       with: 28.7 GB    without: 7.4 GB
    bundle of the last 15 000 revisions:
                      with: 800 seconds, 971 MB    without: 50 seconds, 73 MB
    unbundle time (of the last 15 000 revisions):
                      with: 1150 seconds (~19 minutes)    without: 35 seconds

Similar issues have been observed in other repositories.

Adding a new option or "feature" on stable is uncommon. However, given that this issue makes Mercurial practically unusable, I'm exceptionally targeting this patch for stable.

What is actually needed is a full rework of the delta building and reading logic. However, that will be a longer process and churn not suitable for stable. In the meantime, we introduce a quick and dirty mitigation in the 'experimental' config space. The new option provides a way to set the maximum amount of memory usable to store a diff in memory. This extends Mercurial's ability to create chains without removing all safeguards regarding memory access. The option should be phased out when core has a more proper solution available. Setting the limit to '0' removes all limits; setting it to '-1' uses the default limit (textsize x 4).
-
- Jun 15, 2017
-
-
Martin von Zweigbergk authored
Before this patch, unbundling a stripped changeset would make it a draft (unless the parent was secret). This meant that one would lose phase information when stripping and unbundling secret changesets. The same thing was true for public changesets. While stripping public changesets is generally rare, it's done frequently by e.g. the narrowhg extension.

We also include the phases in the temporary bundle, just in case stripping were to fail after that point, so the user can still restore the repo including phase information.

Before this patch, the phases were left untouched during the bundling and unbundling of the temporary bundle. Only at the end of the transaction would phasecache.filterunknown() be called to remove phase roots that were no longer valid. We now need to call it also after the first stripping, i.e. before applying the temporary bundle; otherwise unbundling the temporary bundle triggers a read of the phase cache while it still contains stripped changesets, and that fails.

Like with obsmarkers, we unconditionally include the phases in the bundle when stripping (when using bundle2, such as when generaldelta is enabled). The reason for doing that for strip but not for bundle is that strip bundles are not meant to be shared outside the repo, so we don't care as much about compatibility.
-
- Oct 17, 2016
-
-
Mads Kiilerich authored
-
- Jul 17, 2016
-
-
Gregory Szorc authored
The bundle2 changegroup part has an advisory param saying how many changesets are in the part. Before this patch, we were setting this param when generating bundle2 parts via the wire protocol but not when generating local bundle2 files.

A side effect of not setting the changeset count param is that progress bars don't work when applying changesets. As the tests show, this impacted clone bundles, shelve, backup bundles, `hg unbundle`, and anything touching bundle2 files.

This patch adds a backdoor to allow us to pass state from changegroup generation into the unbundler. We store the number of changesets in the changegroup in this state and use it to populate the aforementioned advisory part parameter when generating the bundle2 bundle.

I concede that I'm not thrilled by how state is being passed in changegroup.py (it feels a bit hacky). I would love to overhaul the rather confusing set of functions in changegroup.py with something that passes rich objects around instead of e.g. low-level generators. However, given the code freeze for 3.9 is imminent, I'd rather not undertake this endeavor right now. This feels like the easiest way to get the parameter added to the changegroup part.
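As a toy, self-contained sketch of the idea (the classes and the 'nbchanges' parameter name below are illustrative stand-ins, not bundle2's real API):

    # Toy model: the changeset count, known while building the changegroup,
    # is recorded as an advisory parameter on the part so the receiving side
    # can size its progress bar; unknown advisory params are simply ignored.
    class Part(object):
        def __init__(self, parttype, data):
            self.parttype = parttype
            self.data = data
            self.advisoryparams = []

        def addparam(self, name, value, mandatory=False):
            assert not mandatory   # advisory only in this sketch
            self.advisoryparams.append((name, value))

    def make_changegroup_part(changesets):
        part = Part('changegroup', data=b''.join(changesets))
        part.addparam('nbchanges', '%d' % len(changesets))
        return part

    part = make_changegroup_part([b'cs1', b'cs2', b'cs3'])
    assert part.advisoryparams == [('nbchanges', '3')]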
-
Gregory Szorc authored
`hg debugbundle` is calling repr() on bundle2 part params, which are now util.sortdict instances. Unfortunately, repr() doesn't appear to be deterministic for util.sortdict. So, we implement one. We include the type name because that's the common convention for __repr__ implementations. Having the type name in `hg debugbundle` is a bit ugly. But it's a debug command and I don't care enough to fix it.
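A minimal sketch of the idea (not the actual util.sortdict code): an insertion-ordered mapping whose __repr__ is deterministic and carries the type name, so debug output stays stable across runs:

    import collections

    class sortdict(collections.OrderedDict):
        def __repr__(self):
            items = ', '.join('%r: %r' % (k, v) for k, v in self.items())
            return '%s({%s})' % (type(self).__name__, items)

    # repr(sortdict([('b', 1), ('a', 2)])) == "sortdict({'b': 1, 'a': 2})"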
-
- Jul 03, 2016
-
-
Pulkit Goyal authored
This patch adds absolute_import and print_function to the files where they are missing. The modern import conventions are also followed.
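For concreteness, the boilerplate this adds at the top of each module looks like the following minimal, runnable sketch (the module used below is just an example):

    # The two __future__ imports added to each module.
    from __future__ import absolute_import, print_function

    # With absolute_import in effect, a bare `import os` always resolves to
    # the standard-library module, never to a sibling os.py in the package;
    # intra-package imports have to be spelled out explicitly instead.
    import os

    print(os.path.basename('/tmp/example'))  # print is a function on Python 2 too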
-
- Dec 05, 2015
-
-
Martin von Zweigbergk authored
In the most complex case, we try using the incoming delta base, then we try both parents, and then we try the previous revlog entry. If none of these results in a good delta, we naturally use the null revision as base. However, we sometimes consider the nullrev before we have exhausted our other options. Specifically, when both parents are null, we use the nullrev as delta base if it produces a good delta (according to _isgooddelta()), and we fail to try the previous revlog entry as delta base. After 20a9226bdc8a (addrevision: use general delta when the incoming base delta is bad, 2015-12-01), it can also happen for non-merge commits when the incoming delta is not good.

The Firefox repo (from many months back) shrinks a tiny bit with this patch: from 1.855GB to 1.830GB (1.4%). The hg repo itself shrinks even less: by less than 0.1%. There may be repos that get larger instead.

This undoes the unexplained test change in 20a9226bdc8a.
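A minimal Python sketch of that candidate ordering (placeholder names, not revlog's actual code; is_good_delta stands in for _isgooddelta()):

    NULLREV = -1

    def choose_delta_base(incoming_base, p1, p2, prev, is_good_delta):
        """Return the first candidate base that yields a good delta."""
        candidates = []
        if incoming_base is not None:
            candidates.append(incoming_base)   # reuse the delta we received
        candidates.extend(r for r in (p1, p2) if r != NULLREV)  # then the parents
        candidates.append(prev)                # then the previous revlog entry
        for base in candidates:
            if is_good_delta(base):
                return base
        return NULLREV                         # full text only as a last resort

    # e.g. choose_delta_base(None, 5, NULLREV, 7, lambda base: base == 7) == 7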
-
- Dec 02, 2015
-
-
Pierre-Yves David authored
We unify the delta selection process into a simple three-option process:

    - try to use the incoming delta (if lazydeltabase is on)
    - try to find a suitable parent to delta against (if gd is on)
    - try to delta against the tipmost revision

The first of these options that yields a valid delta is used. The test change in 'test-generaldelta.t' shows this behavior: we use a delta against the parent instead of a full delta when the incoming delta is not suitable. This has some impact on 'test-bundle.t' because a delta changes somewhere. It does not seem to change the test's semantics and has been ignored.
-
Pierre-Yves David authored
The currently used manifest is too small and cannot sustain a chain length above "1". This makes testing the 'lazydeltabase' behavior hard. So we add an extra file to the manifest to help testing in the next changeset. The semantics of the existing tests have been checked and are not changed.
-
- Nov 07, 2015
-
-
Pierre-Yves David authored
This test checks the difference between various configurations; we have to pin the type of some repositories to ensure the test is still correct when we change the default.
-
- Nov 02, 2015
-
-
Pierre-Yves David authored
This option will make newly created repositories use general delta by default, but will not make Mercurial aggressively recompute deltas for all incoming bundles. Instead, the deltas contained in the bundle will be used. This will allow us to start having general delta repositories created everywhere without triggering massive recomputation costs for all new clients cloning from old servers.
-
- Oct 18, 2015
-
-
Pierre-Yves David authored
If general delta becomes the default, we need to be explicit about what we want to be tested.
-
- Sep 29, 2015
-
-
Pierre-Yves David authored
Storing uncompressed bundles on disk would be a regression. Strip backups using bundle2 are now compressed when requested.
-
Pierre-Yves David authored
The bundle10 format (plain changegroup-01) does not support general delta and results in expensive delta re-computation when stripping. If the repository is general delta, we store backups as bundle20 containing a changegroup-02 payload. We remove the experimental feature related to the strip backup bundle format because this achieves the same goal in a leaner way. Removing the experimental option is fine; that is why it was experimental in the first place. Compression of these bundles is coming in later changesets.
-
- Aug 30, 2015
-
-
Durham Goode authored
This adds an option for delta'ing against both p1 and p2 when applying merge revisions and picking whichever is smaller.

Some before and after stats on manifest.d size:

    internal large repo:  before: 1.2 GB   after: 930 MB
    mozilla-central:      before: 261 MB   after:  92 MB
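A toy sketch of the approach (not revlog's actual delta code; a textual difflib diff stands in for Mercurial's compact binary bdiff deltas):

    import difflib

    def make_delta(base, text):
        # placeholder textual delta; real revlogs store binary bdiff deltas
        return ''.join(difflib.unified_diff(base.splitlines(True),
                                            text.splitlines(True)))

    def best_merge_delta(p1_text, p2_text, text):
        d1 = make_delta(p1_text, text)
        d2 = make_delta(p2_text, text)
        # the aggressive mode: try both parents and keep the smaller delta
        return ('p1', d1) if len(d1) <= len(d2) else ('p2', d2)

    # best_merge_delta('a\nb\n', 'a\nc\n', 'a\nc\nd\n') picks 'p2', since the
    # new text differs from p2 by only a single added line.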
-
- Nov 21, 2014
-
-
Durham Goode authored
Previously, if reorder was true during the creation of a changegroup bundle, it was possible that the manifest and filelogs would be reordered such that the resulting bundle filelog had a linkrev that pointed to a commit that was not the earliest instance of the filelog revision. For example, with commits:

    0<-1<---3<-4
        \      /
         --2<---

if 2 and 3 added the same version of a file and the manifests of 2 and 3 had their order reversed, but the changelog did not, it could produce a filelog with linkrevs 0<-3 instead of 0<-2. This meant that if commit 3 was stripped, that file data would be deleted from the repository and commit 2 would be corrupt (as would any future pulls that tried to build upon that version of the file).

The fix is to make the linkrev fixup smarter. Previously it considered the first manifest that added a file to be the first commit that added that file, which is not true. Now, for every file revision we add to the bundle, we make sure we attach it to the earliest applicable linkrev.
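A minimal sketch of the fixed-up linkrev choice (placeholder code, not changegroup.py):

    def earliest_linkrevs(introducing):
        """Map each file node to the earliest changelog rev that adds it.

        introducing maps file node -> list of changelog revs introducing that
        exact revision of the file; picking the smallest one ensures that
        stripping a later duplicate cannot orphan the file data.
        """
        return {node: min(clrevs) for node, clrevs in introducing.items()}

    # In the example above, commits 2 and 3 both add the same file node,
    # so its linkrev must be 2, not 3:
    assert earliest_linkrevs({'filenode': [3, 2]}) == {'filenode': 2}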
-
- Oct 21, 2013
-
-
Matt Mackall authored
-
- Oct 01, 2013
-
-
Wojciech Lopata authored
-
- Sep 23, 2013
-
-
Matt Mackall authored
-
- Sep 20, 2013
-
-
Wojciech Lopata authored
Previously, basecache was incorrectly initialized before adding the first revision from a changegroup. The basecache value influences when full revisions are stored in the revlog (when using generaldelta). As a result, it was possible to generate a generaldelta revlog that could be bigger by an arbitrary factor than its non-generaldelta equivalent.
-