- Jan 15, 2017
-
-
Gregory Szorc authored
Commit 63c68d6f5fc8de4afd9bde81b13b537beb4e47e8 from https://github.com/indygreg/python-zstandard is imported without modifications (other than removing unwanted files). This includes minor performance and feature improvements. It also changes the vendored zstd library from 1.1.1 to 1.1.2. # no-check-commit
-
- Jan 14, 2017
-
-
Pulkit Goyal authored
util.buffer() either returns the built-in buffer() or defines a replacement that slices. The built-in buffer() also takes a length argument, which our replacement was missing. This patch adds that length argument.
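A minimal sketch of the compatibility shim described above (not Mercurial's exact code): on Python 3, where the buffer() built-in is gone, fall back to slicing, now honoring the optional length argument that the Python 2 built-in accepts.

```python
try:
    buffer = buffer  # Python 2 built-in: buffer(object[, offset[, size]])
except NameError:
    # Python 3 has no buffer(); emulate it by slicing. The length argument
    # mirrors the built-in's optional size parameter.
    def buffer(sliceable, offset=0, length=None):
        if length is not None:
            return sliceable[offset:offset + length]
        return sliceable[offset:]

# Usage: a slice/view of a bytes-like object.
data = b'0123456789'
assert bytes(buffer(data, 2, 3)) == b'234'
```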
-
- Jan 15, 2017
-
-
Pulkit Goyal authored
On Python 3, pycompat.getenv is os.getenvb, which is not available on Windows. This patch replaces these calls with encoding.environ.get and adds a check to ensure no new instances of os.getenv or os.setenv are introduced.
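A hedged sketch of the portable approach (illustrative only, using a bytes-keyed environment mapping in the spirit of encoding.environ): os.getenvb() relies on os.environb, which does not exist on Windows, so look keys up in a mapping built once in a platform-appropriate way.

```python
import os
import sys

# Build a bytes-keyed environment mapping once. os.environb (and therefore
# os.getenvb) is unavailable on Windows, where the native environment is
# unicode, so encode it explicitly there (UTF-8 chosen here for the sketch).
if sys.platform == 'win32':
    environb = {k.encode('utf-8'): v.encode('utf-8')
                for k, v in os.environ.items()}
else:
    environb = dict(os.environb)

def getenv(key, default=None):
    """Portable byte-string environment lookup (sketch, not pycompat.getenv)."""
    return environb.get(key, default)

print(getenv(b'PATH', b'<unset>'))
```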
-
Yuya Nishihara authored
Otherwise TypeError would be raised. Follows up d1901c4c8ec0.
-
- Jan 14, 2017
-
-
Gregory Szorc authored
The final part of integrating the compression manager APIs into revlog storage is the plumbing for repositories to advertise they are using non-zlib storage and for revlogs to instantiate a non-zlib compression engine. The main intent of the compression manager work was to zstd all of the things. Adding zstd to revlogs has proved to be more involved than other places because revlogs are... special. Very small inputs and the use of delta chains (which are themselves a form of compression) are a completely different use case from streaming compression, which bundles and the wire protocol employ. I've conducted numerous experiments with zstd in revlogs and have yet to formalize compression settings and a storage architecture that I'm confident I won't regret later. In other words, I'm not yet ready to commit to a new mechanism for using zstd - or any other compression format - in revlogs.

That being said, having some support for zstd (and other compression formats) in revlogs in core is beneficial. It can allow others to conduct experiments. This patch introduces *highly experimental* support for non-zlib compression formats in revlogs. Introduced is a config option to control which compression engine to use. Also introduced is a namespace of "exp-compression-*" requirements to denote support for non-zlib compression in revlogs. I've prefixed the namespace with "exp-" (short for "experimental") because I'm not confident of the requirements "schema" and in no way want to give the illusion of supporting these requirements in the future. I fully intend to drop support for these requirements once we figure out what we're doing with zstd in revlogs.

A good portion of the patch is teaching the requirements system about registered compression engines and passing the requested compression engine as an opener option so revlogs can instantiate the proper compression engine for new operations. That's a verbose way of saying "we can now use zstd in revlogs!"

On an `hg pull` conversion of the mozilla-unified repo with no extra redelta settings (like aggressivemergedeltas), we can see the impact of zstd vs zlib in revlogs:

$ hg perfrevlogchunks -c
! chunk
! wall 2.032052 comb 2.040000 user 1.990000 sys 0.050000 (best of 5)
! wall 1.866360 comb 1.860000 user 1.820000 sys 0.040000 (best of 6)
! chunk batch
! wall 1.877261 comb 1.870000 user 1.860000 sys 0.010000 (best of 6)
! wall 1.705410 comb 1.710000 user 1.690000 sys 0.020000 (best of 6)

$ hg perfrevlogchunks -m
! chunk
! wall 2.721427 comb 2.720000 user 2.640000 sys 0.080000 (best of 4)
! wall 2.035076 comb 2.030000 user 1.950000 sys 0.080000 (best of 5)
! chunk batch
! wall 2.614561 comb 2.620000 user 2.580000 sys 0.040000 (best of 4)
! wall 1.910252 comb 1.910000 user 1.880000 sys 0.030000 (best of 6)

$ hg perfrevlog -c -d 1
! wall 4.812885 comb 4.820000 user 4.800000 sys 0.020000 (best of 3)
! wall 4.699621 comb 4.710000 user 4.700000 sys 0.010000 (best of 3)

$ hg perfrevlog -m -d 1000
! wall 34.252800 comb 34.250000 user 33.730000 sys 0.520000 (best of 3)
! wall 24.094999 comb 24.090000 user 23.320000 sys 0.770000 (best of 3)

Only modest wins for the changelog. But manifest reading is significantly faster. What's going on? One reason might be data volume. zstd decompresses faster. So given more bytes, it will put more distance between it and zlib. Another reason is size. In the current design, zstd revlogs are *larger*:

debugcreatestreamclonebundle (size in bytes)
zlib: 1,638,852,492
zstd: 1,680,601,332

I haven't investigated this fully, but I reckon a significant cause of larger revlogs is that the zstd frame/header has more bytes than zlib's. For very small inputs or data that doesn't compress well, we'll tend to store more uncompressed chunks than with zlib (because the compressed size isn't smaller than the original). This will make revlog reading faster because it is doing less decompression.

Moving on to bundle performance:

$ hg bundle -a -t none-v2 (total CPU time)
zlib: 102.79s
zstd: 97.75s

So, marginal CPU decrease for reading all chunks in all revlogs (this is somewhat disappointing).

$ hg bundle -a -t <engine>-v2 (total CPU time)
zlib: 191.59s
zstd: 115.36s

This last test effectively measures the difference between zlib->zlib and zstd->zstd for revlogs to bundle. This is a rough approximation of what a server does during `hg clone`. There are some promising results for zstd. But not enough for me to feel comfortable advertising it to users. We'll get there...
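As a rough illustration of the plumbing described above (hypothetical helper names, not Mercurial's actual code), a repository requirement in the "exp-compression-*" namespace could be mapped to and from the engine name that revlogs should instantiate:

```python
# Sketch only: translate between an "exp-compression-*" requirement string
# and the revlog compression engine name, defaulting to zlib when no
# experimental requirement is present.
REQ_PREFIX = 'exp-compression-'

def requirement_for_engine(engine_name):
    return REQ_PREFIX + engine_name

def engine_from_requirements(requirements, default='zlib'):
    for req in requirements:
        if req.startswith(REQ_PREFIX):
            return req[len(REQ_PREFIX):]
    return default

assert requirement_for_engine('zstd') == 'exp-compression-zstd'
assert engine_from_requirements({'revlogv1', 'exp-compression-zstd'}) == 'zstd'
assert engine_from_requirements({'revlogv1'}) == 'zlib'
```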
-
Gregory Szorc authored
Now that compression engines declare their header in revlog chunks and can decompress revlog chunks, we refactor revlog.decompress() to use them. Making full use of the property that revlog compressor objects are reusable, revlog instances now maintain a dict mapping an engine's revlog header to a compressor object. This is not only a performance optimization for engines where compressor object reuse can result in better performance, but it also serves as a cache of header values so we don't need to perform redundant lookups against the compression engine manager. (Yes, I measured and the overhead of a function call versus a dict lookup was observed.)

Replacing the previous inline lookup table with a dict lookup was measured to make chunk reading ~2.5% slower on changelogs and ~4.5% slower on manifests. So, the inline lookup table has been mostly preserved so we don't lose performance. This is unfortunate. But many decompression operations complete in microseconds, so Python attribute lookup, dict lookup, and function calls do matter.

The impact of this change on mozilla-unified is as follows:

$ hg perfrevlogchunks -c
! chunk
! wall 1.953663 comb 1.950000 user 1.920000 sys 0.030000 (best of 6)
! wall 1.946000 comb 1.940000 user 1.910000 sys 0.030000 (best of 6)
! chunk batch
! wall 1.791075 comb 1.800000 user 1.760000 sys 0.040000 (best of 6)
! wall 1.785690 comb 1.770000 user 1.750000 sys 0.020000 (best of 6)

$ hg perfrevlogchunks -m
! chunk
! wall 2.587262 comb 2.580000 user 2.550000 sys 0.030000 (best of 4)
! wall 2.616330 comb 2.610000 user 2.560000 sys 0.050000 (best of 4)
! chunk batch
! wall 2.427092 comb 2.420000 user 2.400000 sys 0.020000 (best of 5)
! wall 2.462061 comb 2.460000 user 2.400000 sys 0.060000 (best of 4)

Changelog chunk reading is slightly faster but manifest reading is slower. What gives? On this repo, 99.85% of changelog entries are zlib compressed (the 'x' header). On the manifest, 67.5% are zlib and 32.4% are '\0'. This patch swapped the test order of 'x' and '\0' so now 'x' is tested first. This makes changelogs faster since they almost always hit the first branch. This makes a significant percentage of manifest '\0' chunks slower because that code path now performs an extra test. Yes, I too can't believe we're able to measure the impact of an if..elif with simple string compares. I reckon this code would benefit from being written in C...
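A hedged sketch of the caching pattern described above (illustrative class names, not revlog's actual code): key a per-instance dict on the chunk's revlog header byte so repeated chunks reuse a decompressor and skip the engine-manager lookup.

```python
import zlib

class ZlibEngine:
    """Stand-in for a compression engine whose revlog header is 'x'."""
    def decompress(self, data):
        return zlib.decompress(data)

ENGINE_MANAGER = {b'x': ZlibEngine()}   # registry keyed by revlog header bytes

class ChunkReader:
    def __init__(self):
        self._decompressors = {}        # header -> engine, filled lazily

    def decompress(self, chunk):
        header = chunk[0:1]
        engine = self._decompressors.get(header)
        if engine is None:
            engine = ENGINE_MANAGER[header]   # one manager lookup, then cached
            self._decompressors[header] = engine
        return engine.decompress(chunk)

reader = ChunkReader()
assert reader.decompress(zlib.compress(b'hello revlog')) == b'hello revlog'
```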
-
- Jan 13, 2017
-
-
Denis Laxalde authored
There's no apparent reason to have this "entries" generator function, which builds a list and then yields its elements in reverse order, and which is only called to build the "entries" list. So just build the list directly, in reverse order. Adjust the "parity" generator's offset to keep the rendering the same.
-
- Jan 14, 2017
-
-
Gregory Szorc authored
As pointed out by Yuya, this action doesn't add much (any?) value.
-
Yuya Nishihara authored
readline() returns '' only when EOF is encountered, in which case Python's getpass() raises EOFError. We should do the same so the session is aborted with "response expected". This bug was reported at https://bitbucket.org/tortoisehg/thg/issues/4659/
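A minimal sketch of the intended behavior (not the actual ui code): treat an empty string from readline() as EOF and raise EOFError, mirroring Python's getpass(), so the caller can abort with "response expected".

```python
import io
import sys

def prompt(message, stream=None):
    """Read one response; raise EOFError on end of input like getpass() does."""
    stream = stream if stream is not None else sys.stdin
    sys.stdout.write(message)
    sys.stdout.flush()
    line = stream.readline()
    if not line:                 # '' is only returned at EOF
        raise EOFError('end of input while a response was expected')
    return line.rstrip('\n')

try:
    prompt('password: ', io.StringIO(''))   # simulated EOF
except EOFError:
    print('abort: response expected')
```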
-
Gregory Szorc authored
When converting a Git repository to Mercurial at Mozilla, I encountered a scenario where I didn't want `hg convert` to automatically add the "committer: <committer>" line to commit messages. While I can hack around this by rewriting the Git commit before it is fed into `hg convert`, I figured it would be a useful knob to control. This patch introduces a config option that allows lots of control over the committer value. I initially implemented this as a single boolean flag to control whether to save the committer message. But then there was feedback that it would be useful to save the committer in extra data. While this patch doesn't implement support for saving in extra data, it does add a mechanism for extending which actions to take on the committer field. We should be able to easily add actions to save in extra data. Some of the implemented features weren't asked for. But I figured they could be useful. If nothing else they demonstrate the extensibility of this mechanism.
-
Gregory Szorc authored
I've probably typed `hg help mergetool` dozens of times. I'm tired of it not working.
-
- Jan 12, 2017
-
-
Matthieu Laneuville authored
The result of diffstatdata should not depend on whether noprefix is set, as was reported in issue 4755. Forcing noprefix to False for this call makes sure the parser receives the diff in the expected format and returns the proper result. Another way to fix this would have been to change the regular expressions in patch.diffstatdata(), but that would have introduced many unnecessary special cases.
-
- Jan 13, 2017
-
-
Martin von Zweigbergk authored
Module-level @cachefunc usage is risky because it can easily create a memory "leak". Let's reject it completely for now. If a valid usage comes up in the future, we can always improve the check or reconsider.
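A small illustration of the risk being checked for (using functools.lru_cache as a stand-in for util.cachefunc): a module-level memoizing cache lives for the whole process, so every distinct argument it ever sees stays referenced.

```python
import functools

@functools.lru_cache(maxsize=None)   # module-level memoization, never cleared
def expensive(key):
    return key * 2

for i in range(1000000):
    expensive(i)                     # a million entries now pinned in memory

print(expensive.cache_info().currsize)   # 1000000 -- the "leak"
```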
-
Pierre-Yves David authored
To prevent Bad Things™ from happening, let's rework the logic to not use util.cachefunc.
-
- Jan 09, 2017
-
-
Sean Farley authored
Just like the summary says, this will colorize the "similarity index 88%" line in the diff output.
-
Sean Farley authored
Tests have been added.
-
Sean Farley authored
This config knob will control whether or not to show the similarity calculation in the diff output:

diff --git a/README.md b/foo.md
similarity index 88%
rename from README.md
rename to foo.md
--- a/README.md
+++ b/foo.md
-
- Jan 08, 2017
-
-
Sean Farley authored
Future patches will use this to report the similarity of a rename / copy in the patch output.
-
- Jan 09, 2017
-
-
Yuya Nishihara authored
This slightly complicates the parsing (see the previous patch), but the overall result seems not bad. I keep x:, :y and : for future extension.
-
Yuya Nishihara authored
This allows us to handle an x:y range as a general range object. A primary user of it is followlines().
-
Yuya Nishihara authored
This seems handy.
-
Yuya Nishihara authored
We have 4 revset functions that take integer arguments, and they handle their arguments in slightly different ways. This patch unifies them:

- getstring() in place of getsymbol(), which is more consistent with the handling of integer revisions (both 1 and '1' are valid)
- say "expects" instead of "requires" for type errors

We don't need to catch TypeError since getstring() must return a string.
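A hedged sketch of the unified handling (an illustrative helper, not the revset parser itself): both a bare 1 and a quoted '1' are accepted, and type errors are reported with "expects".

```python
class ParseError(Exception):
    pass

def getinteger(value, err):
    """Coerce a revset argument to int, accepting 1 or '1' (sketch)."""
    try:
        return int(value)
    except (TypeError, ValueError):
        raise ParseError(err)

assert getinteger(2, "limit expects a number") == 2
assert getinteger('2', "limit expects a number") == 2
try:
    getinteger('two', "limit expects a number")
except ParseError as e:
    print(e)   # limit expects a number
```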
-
Yuya Nishihara authored
The rev argument has the same meaning as startrev of follow(), and I think startrev is more informative. followlines() is a new function, so we can make this BC change now.
-
- Jan 13, 2017
-
-
Yuya Nishihara authored
Follows up 5dd67f0993ce. Now that revisions.txt and revsets.txt have been merged, use revisions.* as a pointer.
-
- Jan 02, 2017
-
-
Gregory Szorc authored
Previously, compression engines had APIs for performing revlog compression but no mechanism to perform revlog decompression. This patch changes that. Revlog decompression is slightly more complicated than compression because in the compression case there is (currently) only a single engine that can be used at a time. However for decompression, a revlog could contain chunks from multiple compression engines. This means decompression needs to map to multiple engines and decompressors. This functionality is outside the scope of this patch. But it drives the decision for engines to declare a byte header sequence that identifies revlog data as belonging to an engine and an API for obtaining an engine from a revlog header.
-
- Jan 08, 2017
-
-
Anton Shestakov authored
I really want an option to toggle the selection of a line and also move the cursor down with a single keystroke. It also kinda makes sense for the space key to do this, because some other curses UIs in the wild do this (e.g. various file managers, htop). So I got the idea of adding a config option that defaults to False for compatibility, but allows making the crecord UI a lot more useful for people with big hunks. We add this as an experimental option so we can experiment with this behavior.
-
- Jan 02, 2017
-
-
Gregory Szorc authored
Now that the revlog has a reference to a compressor, it is possible to swap in other compression engines. So, teach `hg perfrevlogchunks` to do that. The default behavior of `hg perfrevlogchunks` is now to measure the compression performance of all compression engines implementing the revlog compressor API. This effectively adds the no-op "none" compressor and zstd (when available) into the default set. While we can't yet plug alternate compressors into revlogs, this command gives us a preview of the performance.

On the mozilla-unified repository:

$ hg perfrevlogchunks -c
! compress w/ none
! wall 0.115159 comb 0.110000 user 0.110000 sys 0.000000 (best of 86)
! compress w/ zlib
! wall 5.681406 comb 5.680000 user 5.680000 sys 0.000000 (best of 3)
! compress w/ zstd
! wall 2.624781 comb 2.620000 user 2.620000 sys 0.000000 (best of 4)

$ hg perfrevlogchunks -m
! compress w/ none
! wall 0.124486 comb 0.120000 user 0.120000 sys 0.000000 (best of 79)
! compress w/ zlib
! wall 10.144701 comb 10.150000 user 10.150000 sys 0.000000 (best of 3)
! compress w/ zstd
! wall 4.383118 comb 4.390000 user 4.390000 sys 0.000000 (best of 3)

Those numbers for zstd look promising. But they aren't the full story. For that, we'll need to look at decompression times and storage sizes. Stay tuned...
-
Gregory Szorc authored
This commit swaps in the just-added revlog compressor API into the revlog class. Instead of implementing zlib compression inline in compress(), we now store a cached-on-first-use revlog compressor on each revlog instance and invoke its "compress()" method. As part of this, revlog.compress() has been refactored a bit to use a cleaner code flow and modern formatting (e.g. avoiding parenthesis around returned tuples).

On a mozilla-unified repo, here are the "compress" times for a few commands:

$ hg perfrevlogchunks -c
! wall 5.772450 comb 5.780000 user 5.780000 sys 0.000000 (best of 3)
! wall 5.795158 comb 5.790000 user 5.790000 sys 0.000000 (best of 3)

$ hg perfrevlogchunks -m
! wall 9.975789 comb 9.970000 user 9.970000 sys 0.000000 (best of 3)
! wall 10.019505 comb 10.010000 user 10.010000 sys 0.000000 (best of 3)

Compression times did seem to slow down just a little. There are 360,210 changelog revisions and 359,342 manifest revisions. For the changelog, mean time to compress a revision increased from ~16.025us to ~16.088us. That's basically a function call or an attribute lookup. I suppose this is the price you pay for abstraction. It's so low that I'm not concerned.
-
Gregory Szorc authored
As part of "zstd all of the things," we need to teach revlogs to use non-zlib compression formats. Because we're routing all compression via the "compression manager" and "compression engine" APIs, we need to introduction functionality there for performing revlog operations. Ideally, revlog compression and decompression operations would be implemented in terms of simple "compress" and "decompress" primitives. However, there are a few considerations that make us want to have a specialized primitive for handling revlogs: 1) Performance. Revlogs tend to do compression and especially decompression operations in batches. Any overhead for e.g. instantiating a "context" for performing an operation can be noticed. For this reason, our "revlog compressor" primitive is reusable. For zstd, we reuse the same compression "context" for multiple operations. I've measured this to have a performance impact versus constructing new contexts for each operation. 2) Specialization. By having a primitive dedicated to revlog use, we can make revlog-specific choices and leave the door open for more functionality in the future. For example, the zstd revlog compressor may one day make use of dictionary compression. A future patch will introduce a decompress() on the compressor object. The code for the zlib compressor is basically copied from revlog.compress(). Although it doesn't handle the empty input case, the null first byte case, and the 'u' prefix case. These cases will continue to be handled in revlog.py once that code is ported to use this API.
-
Gregory Szorc authored
Upcoming patches will convert revlogs to use the compression engine APIs to perform all things compression. The yet-to-be-introduced APIs support a persistent "compressor" object so the same object can be reused for multiple compression operations, leading to better performance. In addition, compression engines like zstd may wish to tweak compression engine state based on the revlog (e.g. per-revlog compression dictionaries). A global and shared decompress() function will shortly no longer make much sense. So, we move decompress() to be a method of the revlog class. It joins compress() there.

On the mozilla-unified repo, we can measure the impact of this change on reading performance:

$ hg perfrevlogchunks -c
! chunk
! wall 1.932573 comb 1.930000 user 1.900000 sys 0.030000 (best of 6)
! wall 1.955183 comb 1.960000 user 1.930000 sys 0.030000 (best of 6)
! chunk batch
! wall 1.787879 comb 1.780000 user 1.770000 sys 0.010000 (best of 6)
! wall 1.774444 comb 1.770000 user 1.750000 sys 0.020000 (best of 6)

"chunk" appeared to become slower but "chunk batch" got faster. Upon further examination by running both sets multiple times, the numbers appear to converge across all runs. This tells me that there is no perceived performance impact to this refactor.
-
Gregory Szorc authored
revlog.compress() compares the compressed size to the input size and throws away the compressed data if it is larger than the input. This is the correct thing to do, as storing compressed data that is larger than the input takes up more storage space and makes reading slower. However, the comparison was implemented inconsistently. For the streaming compression mode, we threw away the result if it was greater than or equal to the input size. But for the one-shot compression, we threw away the compressed data only if it was greater than the input size! This patch changes the comparison for the simple case so it is consistent with the streaming case.

As a few tests demonstrate, this adds 1 byte to some revlog entries. This is because of an added 'u' header on the chunk. It seems somewhat wrong to increase the revlog size here. However, IMO the cost of 1 byte in storage is insignificant compared to the performance gains of avoiding decompression. This patch should invite questions around the heuristic for throwing away compressed data. For example, I'd argue we should be more liberal about rejecting compressed data, additionally doing so when the number of bytes saved fails to reach a threshold. But we can have this discussion another time.
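A minimal sketch of the storage heuristic being made consistent (not revlog.compress() itself, which also handles empty input and data whose first byte is NUL): keep the zlib output only when it is strictly smaller than the input, otherwise store the chunk uncompressed behind a 'u' marker so reads skip decompression.

```python
import zlib

def store_chunk(data):
    compressed = zlib.compress(data)
    if len(compressed) < len(data):   # ">= input size" now rejected in both modes
        return compressed             # zlib output begins with 'x'
    return b'u' + data                # one extra byte, but no decompression cost

assert store_chunk(b'a' * 1000).startswith(b'x')   # compressible -> stored compressed
assert store_chunk(b'\x07\x42').startswith(b'u')   # tiny input -> 'u' + raw bytes
```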
-
- Jan 08, 2017
-
-
Sean Farley authored
Future patches will move the score function to the module level, so let's not shadow that.
-
- Jan 09, 2017
-
-
Sean Farley authored
Just like the summary says, this will colorize the "index 3d3ba4b65e11..57274a0f46b2 100644" line in the diff output.
-
- Dec 31, 2016
-
-
Sean Farley authored
This helps highlighting in third-party diff coloring (which assumes git output) and maintains pedantic correctness with diff --git. Tests will be added at the end of the series.
-
- Jan 09, 2017
-
-
Sean Farley authored
This config knob can take an integer between 0 and 40 or a keyword ('none', 'short', 'full') to control the length of hash to output. It will display diffs with the git index header as such:

diff --git a/mercurial/mdiff.py b/mercurial/mdiff.py
index 112edf7..d6b52c5 100644

We'll put this in the experimental section for now.
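A hedged sketch of how such a knob's value could be interpreted (illustrative helper only; the option name and the exact 'short' width are assumptions, with 'short' set to the 7 hex digits shown in the example above):

```python
def index_hash_length(value, full=40, short=7):
    """Map a config value (0-40 or 'none'/'short'/'full') to a digit count."""
    keywords = {'none': 0, 'short': short, 'full': full}
    if value in keywords:
        return keywords[value]
    length = int(value)
    if not 0 <= length <= full:
        raise ValueError('index hash length must be between 0 and 40')
    return length

assert index_hash_length('none') == 0
assert index_hash_length('short') == 7
assert index_hash_length('12') == 12
```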
-
- Jan 12, 2017
-
-
Martin von Zweigbergk authored
We have specific syntax for displaying the help text for a particular revset predicate, so let's refer directly to the bisect() revset in the verbose bisect help. It seems likely that the user doesn't care about other revsets at that point, so they will probably not miss the text about the other revset predicates.
-
Martin von Zweigbergk authored
We have specific syntax for displaying the help text for a particular revset predicate, so use that instead of grepping through the full output.
-
Martin von Zweigbergk authored
"hg help revisions" and "hg help revsets" now point to the same text, so drop the revsets reference.
-
- Jan 08, 2017
-
-
Matt Harbison authored
There's no reason to duplicate this so many times, and it's likely an instance will be missed if support for a new pattern is added and documented. The stringmatcher is mostly used by revsets, though it is also used for the 'tag' related templates, and namespace filtering in the journal extension. So maybe there's a better place to document it. `hg help patterns` seems inappropriate, because that is all file pattern matching. While here, indicate how to perform case insensitive regex searches.
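For the case-insensitivity note above: stringmatcher's 're:' patterns are ordinary Python regular expressions, so the inline (?i) flag turns on case-insensitive matching. A small illustration (the pattern itself is just an example):

```python
import re

# e.g. a revset argument such as tag('re:(?i)^release') would match
# tag names regardless of case; the underlying mechanism is just this:
pattern = re.compile(r'(?i)^release')
assert pattern.search('RELEASE-1.0')
assert pattern.search('Release candidate')
assert not pattern.search('prerelease')
```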
-
Matt Harbison authored
This is a case insensitive predicate like 'author', so it conforms to the existing behavior of performing a case insensitive regex.
-