- Oct 17, 2016
Martin von Zweigbergk authored
Found by running tests with _treeinmem (both of them) modified to be True.
- Oct 16, 2016
Gregory Szorc authored
Currently, the "getbundle" wire protocol command obtains a generator of data, converts it to a util.chunkbuffer, then converts it back to a generator via the protocol's groupchunks() implementation. For the SSH protocol, groupchunks() simply reads 4kb chunks then write()s the data to a file descriptor. For the HTTP protocol, groupchunks() reads 32kb chunks, feeds those into a zlib compressor, emits compressed data as it is available, and that is sent to the WSGI layer, where it is likely turned into HTTP chunked transfer chunks as is or further buffered and turned into a larger chunk.

For both the SSH and HTTP protocols, there is inefficiency from using util.chunkbuffer. For SSH, emitting consistent 4kb chunks sounds nice. However, the file descriptor it is writing to is almost certainly buffered. That means that a Python .write() probably doesn't translate into exactly what is written to the I/O layer. For HTTP, we're going through an intermediate layer to zlib compress data. So all util.chunkbuffer is doing is ensuring that the chunks we feed into the zlib compressor are of uniform size. This means more CPU time in Python buffering and emitting chunks in util.chunkbuffer but fewer function calls to zlib.

This patch introduces and implements a new wire protocol abstract method: compresschunks(). It is like groupchunks() except it operates on a generator instead of something with a .read(). The SSH implementation simply proxies chunks. The HTTP implementation uses zlib compression. To avoid duplicate code, the HTTP groupchunks() has been reimplemented in terms of compresschunks().

To prove this all works, the "getbundle" wire protocol command has been switched to compresschunks(). This removes the util.chunkbuffer from that command. Now, data essentially streams straight from the changegroup emitter to the wire, possibly through a zlib compressor. Generators all the way, baby.

There were slim to no performance changes on the server as measured with the mozilla-central repository. This is likely because CPU time is dominated by reading revlogs, producing the changegroup, and zlib compressing the output stream. Still, this brings us a little closer to our ideal of using generators everywhere.
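A rough sketch of the shape of this abstraction (the class names are illustrative and the bodies simplified; this is not the actual Mercurial implementation, though the 32kb read size matches the description above):

  import zlib

  class sshproto(object):
      def compresschunks(self, chunks):
          # SSH: the file descriptor we write to is already buffered,
          # so simply proxy the changegroup chunks through untouched
          for chunk in chunks:
              yield chunk

  class httpproto(object):
      def compresschunks(self, chunks):
          # HTTP: stream each chunk through zlib, emitting compressed
          # data as the compressor produces it
          z = zlib.compressobj()
          for chunk in chunks:
              yield z.compress(chunk)
          yield z.flush()

      def groupchunks(self, fh):
          # the old .read()-based entry point, reimplemented on top of
          # compresschunks() to avoid duplicate code
          return self.compresschunks(iter(lambda: fh.read(32768), b''))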
- Oct 17, 2016
Mads Kiilerich authored
destination() will scan through the whole subset and read extras for each revision to get its source.
- Oct 11, 2016
Gábor Stefanik authored
When working in a rotated DAG (for a graftlike merge), there can be files that are renamed both between the base and the topological CA, and between the TCA and the endpoint farther from the base. Such renames span the TCA (and thus need both passes of _checkcopies to be fully detected), but may not necessarily be divergent. Make _checkcopies return "incomplete copies" and "incomplete divergences" in this case, and let mergecopies recombine them once data from both passes of _checkcopies is available.

With this patch, all known cases involving renames and grafts pass. (Developed together with Pierre-Yves David)
Gábor Stefanik authored
As the two _checkcopies passes' ranges are separated by tca, not base, only one of the two passes will actually encounter the base. Pass "remotebase" to the other pass to let it know not to expect passing over the base. This is required for handling a few unusual rename cases.
- Oct 04, 2016
Gábor Stefanik authored
We first combine incomplete copies on the two sides of the topological CA into complete copies. Any leftover incomplete copies are then combined with the incomplete divergences to reconstruct divergences spanning over the topological CA. Finally we promote any divergences falsely flagged as incomplete to full divergences. Right now, there is nothing generating incomplete copy/divergence data, so this code does nothing. Changes to _checkcopies to populate these dicts are coming later in this series.
- Oct 12, 2016
Gábor Stefanik authored
During a graftlike merge, _checkcopies runs from ctx to tca, possibly passing over the merge base. If there is a rename both before and after the base, then we're actually dealing with divergent renames. If there is no rename on the other side of tca, then the divergence is contained entirely in the range of one _checkcopies invocation, and should be detected "in the loop" without having to rely on the other _checkcopies pass.
- Aug 25, 2016
Gábor Stefanik authored
As a follow-up to the issue4028 series, this fixes a variant of the issue that can occur when updating with uncommitted local changes. The duplicated .hgsub warning is coming from wc.dirty(). We would previously skip this call because it's only relevant when we're going to perform copy tracing, which we didn't do before. The change to the update summary line is because we now treat the rename as a proper rename (which counts as a change), rather than an add+delete pair (which counts as a change and a delete).
- Sep 26, 2016
Mathias De Mare authored
On systems with large repositories and slow disks, the calls to 'hg status' make autocomplete annoyingly slow. This fix makes it possible to avoid the slowdown.
- Aug 20, 2016
Mads Kiilerich authored
Prepare for test coverage of phase updates with the future push --readonly option, both with and without actually pushing changesets.
- Oct 13, 2016
Gábor Stefanik authored
The algorithm of _checkcopies can only walk backwards in the DAG, never forward. Because of this, the two _checkcopies passes need to run from their respective endpoints to the TCA to cover the entire subgraph where the merge is being performed. However, detection of files new in both endpoints, as well as directory rename detection, need to run with respect to the merge base, so we need lists of new files both from the TCA's and the merge base's viewpoint to correctly detect renames in a graft-like merge scenario. (Series reworked by Pierre-Yves David)
- Oct 17, 2016
Pierre-Yves David authored
_computenonoverlap needs to be invoked twice during a graft, and debugging messages should be distinguishable between the two invocations.
- Oct 13, 2016
Gábor Stefanik authored
This introduces a distinction between "merge base" and "topological common ancestor". During a regular merge, these two are identical. Graft, however, performs a merge in a rotated DAG, where the merge base will not be a common ancestor at all in the original DAG. To correctly find copies in case of a graft, we need to take both the merge base and the topological CA into account, and track any renames between them in reverse. Fortunately we can detect this in advance; see the comment in the code about "backwards".

This patch only supports finding non-divergent renames contained entirely between the merge base and the topological CA. Further patches are coming to support more complex cases. (Pierre-Yves David was involved in the cleanup of this patch.)
Gábor Stefanik authored
This will be used later in an update to _checkcopies. (Pierre-Yves David was involved in the cleanup of this patch.)
- Oct 12, 2016
Gábor Stefanik authored
Right now, nothing changes as a result of this, but we want to handle grafts differently from ordinary merges later. (Series developed together with Pierre-Yves David)
Gábor Stefanik authored
These cover all currently known cases of renames being grafted, or changes being grafted through renames. Right now, most of these cases are broken. Later patches in this series will make them behave correctly. The testcases heavily rely on each other, which would make it very difficult to separate them and add them one-by-one for each case fixed by a patch. Separating them should perhaps be a 4.1 task, if it doesn't slow down the tests too much. (Developed together with Pierre-Yves David)
Gábor Stefanik authored
When grafting a copy backwards through a rename, a copy is wrongly detected, which causes the graft to be applied inappropriately, in a destructive way. Make sure that the old file name really exists in the common ancestor, and bail out if it doesn't. This fixes the aggravated case of bug 5343, although the basic issue (failure to duplicate the copy information) still occurs.
- Oct 16, 2016
Gregory Szorc authored
Currently, exchange.getbundle() returns either a cg1unpacker or a util.chunkbuffer (in the case of bundle2). This is kinda OK, as both expose a .read() to consumers. However, localpeer.getbundle() has code inferring what the response type is based on arguments and converts the util.chunkbuffer returned in the bundle2 case to a bundle2.unbundle20 instance. This is a sign that the API for exchange.getbundle() is not ideal because it doesn't consistently return an "unbundler" instance.

In addition, unbundlers mask the fact that there is an underlying generator of changegroup data. In both cg1 and bundle2, this generator is being fed into a util.chunkbuffer so it can be re-exposed as a file object. util.chunkbuffer is a nice abstraction. However, it should only be used "at the edges." This is because keeping data as a generator is more efficient than converting it to a chunkbuffer, especially if we convert that chunkbuffer back to a generator (as is the case in some code paths currently).

This patch refactors exchange.getbundle() into exchange.getbundlechunks(). The new API returns an iterator of chunks instead of a file-like object. Callers of exchange.getbundle() have been updated to use the new API.

There is a minor change of behavior in test-getbundle.t. This is because `hg debuggetbundle` isn't defining bundlecaps. As a result, a cg1 data stream and unpacker is being produced. This is getting fed into a new bundle20 instance via bundle2.writebundle(), which uses a backchannel mechanism between changegroup generation to add the "nbchanges" part parameter. I never liked this backchannel mechanism and I plan to remove it someday. `hg bundle` still produces the "nbchanges" part parameter, so there should be no user-visible change of behavior. I consider this "regression" a bug in `hg debuggetbundle`. And that bug is captured by an existing "TODO" in the code to use bundle2 capabilities.
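A minimal sketch of the "chunkbuffer only at the edges" idea; this chunkbuffer is an illustrative stand-in for util.chunkbuffer, not the real one:

  class chunkbuffer(object):
      """Adapt an iterator of byte chunks to a .read()-style file object."""
      def __init__(self, it):
          self._it = iter(it)
          self._buf = b''

      def read(self, n):
          # pull chunks from the generator only on demand
          while len(self._buf) < n:
              try:
                  self._buf += next(self._it)
              except StopIteration:
                  break
          data, self._buf = self._buf[:n], self._buf[n:]
          return data

  # exchange.getbundlechunks() now hands back the generator itself;
  # only callers that genuinely need a file object wrap it at the edge:
  #   fh = chunkbuffer(exchange.getbundlechunks(repo, source))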
- Oct 12, 2016
Pierre-Yves David authored
This variable was named after the common ancestor. It is actually the merge base, which might differ from the common ancestor in the graft case. We rename the variable before a larger refactoring to clarify the situation. A similar rename was also applied to 'checkcopies' in a prior changeset.
Pierre-Yves David authored
It appears that 'mergecopies' is the function consuming this data, so we move the documentation there.
- Oct 11, 2016
Pierre-Yves David authored
More are coming.
Pierre-Yves David authored
The 'movewithdir' dictionary had a lot of related logic spread all around 'mergecopies'. However, it never actually contains anything until the very last loop in that function. We move the (simplified) variable definition there for clarity.
- Oct 13, 2016
Mads Kiilerich authored
Python "delstructors" are terrible - this one because it assumed that __init__ had completed before it was called. That would not necessarily be the case if the repository was read-only or broken, and saving the dirstate thus failed in unexpected ways. That could give confusing warnings about missing '_active' after failures. To fix that, make sure all member variables are "declared" before doing anything that possibly could fail. [Famous last words.]
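A sketch of the pattern, in the spirit of the guard class this message describes (the class name and the savedirstate() call are illustrative, not the real code):

  class guard(object):
      def __init__(self, repo):
          # "declare" every attribute __del__ touches before anything
          # that can fail, so a half-constructed object tears down safely
          self._active = False
          self._repo = repo
          repo.savedirstate()   # hypothetical risky step; may raise
          self._active = True

      def release(self):
          self._active = False

      def __del__(self):
          # runs even if __init__ blew up halfway through; without the
          # early assignment above, this would warn about missing '_active'
          if self._active:
              self.release()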
Mads Kiilerich authored
util.filechunkiter has been using a chunk size of 64k for more than 10 years, even in years when Moore's law still held. It is probably ok to bump it now and perhaps get a slight win in some cases. Also, largefiles have been using 128k for a long time. Specifying that size multiple times (or forgetting to do it) seems a bit stupid. Decreasing it to 64k also seems unfortunate. Thus, we will set the default chunk size to 128k and use the default everywhere.
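Sketched, the helper with its new default (simplified; the real version also supports limiting how many bytes are read in total):

  def filechunkiter(f, size=131072):
      # yield successive 128k chunks from file object f until EOF
      chunk = f.read(size)
      while chunk:
          yield chunk
          chunk = f.read(size)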
- Oct 12, 2016
Mads Kiilerich authored
Before, we would sometimes use the default iterator over large files. That iterator is line-based and would add extra buffering and use odd chunk sizes, which could give some overhead. copyandhash can't just apply a filechunkiter, as it sometimes is passed a genuine generator when downloading remotely.
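A sketch of a copyandhash of the shape this implies (the signature is an assumption; the point is that anything iterable of byte chunks works, which is why both a filechunkiter and a genuine generator are acceptable inputs):

  import hashlib

  def copyandhash(instream, outfile):
      # copy byte chunks to outfile while hashing them; return hex sha1
      h = hashlib.sha1()
      for chunk in instream:
          h.update(chunk)
          outfile.write(chunk)
      return h.hexdigest()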
- Oct 14, 2016
Yuya Nishihara authored
Since we don't count null p2 revision as a parent, x^2 should never return null even if null is explicitly populated.
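The rule, sketched (parentrevs stands for a (p1, p2) pair of revision numbers, with -1 for null, as in Mercurial's internals):

  nullrev = -1

  def second_parent(parentrevs):
      # a null p2 is not a real parent, so a non-merge revision has
      # no second parent for x^2 to return
      p1, p2 = parentrevs
      return p2 if p2 != nullrev else None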
- Oct 10, 2016
Yuya Nishihara authored
Taking only the last revision is inconsistent because ancestors(set) follows all revisions given, and theoretically follow(startrev=set) == ancestors(set). I'm planning to add support for multiple start revisions, but that won't fit into the 4.0 time frame. So reject multiple revisions now to avoid future BC. len(revs) might be slow if revs were large, but we don't care since a valid revs should have only one element.
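A sketch of the guard (the helper name and exception type are illustrative; the real predicate raises Mercurial's own parse error):

  def followstart(revs):
      # reject multiple start revisions for now, keeping room to define
      # follow(startrev=set) == ancestors(set) later without a BC break
      if len(revs) > 1:
          raise ValueError("follow expected at most one start revision")
      return revs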
- Oct 16, 2016
Gregory Szorc authored
This is similar to 58467204cac0. Not all calls into the compressor return compressed data, as the compressor may buffer compressed output internally. It is cheaper to check for empty chunks than to send empty chunks through the generator. When generating a gzip-v2 bundle of the mozilla-unified repo, this change results in 50,093 empty chunks not being sent through the generator (out of 1,902,996 total input chunks).
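Applied to the compresschunks() sketch from the earlier message, the check looks like this:

  import zlib

  def compresschunks(chunks):
      z = zlib.compressobj()
      for chunk in chunks:
          data = z.compress(chunk)
          # the compressor may buffer its input and hand back b'';
          # testing for that here is cheaper than pushing empty
          # chunks through the whole generator chain
          if data:
              yield data
      yield z.flush()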
- Oct 15, 2016
Danek Duvall authored
- Oct 13, 2016
Danek Duvall authored
This also fixes a long-standing bug that reversed the sense of the color name and the label used to print it, which was never relevant before.
Danek Duvall authored
If terminfo mode is in effect, and an effect is used which is missing from the terminfo database, simply silently ignore the request, leaving the output unaffected rather than causing a crash.
Danek Duvall authored
If the entry in the terminfo database for your terminal is missing some attributes, it should be possible to create them on the fly without resorting to just making them a color. This change allows you to have

  [color]
  terminfo.<effect> = <code>

where <effect> might be something like "dim" or "bold", and <code> is the escape sequence that would otherwise have come from a call to tigetstr(). If an escape character is needed, use "\E". Any such settings will override attributes that are present in the terminfo database.
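A sketch of the lookup order this describes (effectstr and the overrides dict are illustrative; overrides stands for the parsed terminfo.<effect> config items):

  import curses

  def effectstr(effect, overrides):
      # config wins over the terminfo database; "\E" means ESC
      if effect in overrides:
          return overrides[effect].replace('\\E', '\x1b').encode('ascii')
      code = curses.tigetstr(effect)   # None if missing from terminfo
      return code or b''               # missing effect: leave output alone

  # curses.setupterm() must have been called once before tigetstr()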
- Oct 04, 2016
Stanislau Hlebik authored
During an update, directories are deleted as soon as they have no entries. But if the current working directory is deleted, it causes problems in complex commands like 'hg split'. This commit adds a warning that will help users figure out the problem faster.
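A sketch of the detection (the ui.warn-style output and exact warning text are assumptions, not the actual message):

  import os

  def warnremovedcwd(ui):
      # once the working directory has been unlinked out from under
      # the process, os.getcwd() raises OSError - our cue to warn
      try:
          os.getcwd()
      except OSError:
          ui.warn("current directory was removed\n")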
- Oct 13, 2016
Gregory Szorc authored
PySlice_GetIndicesEx() accepts a PySliceObject* on Python 2 and a PyObject* on Python 3. Casting to PySliceObject* on Python 3 was yielding a compiler warning. So stop doing that. With this patch, I no longer see any compiler warnings when building the core extensions for Python 3!
Gregory Szorc authored
Without this, IS_PY3K isn't defined and the preprocessor uses the incorrect module loading code, causing the module to fail to load at run-time. After this patch, all our C extensions (except for watchman's) appear to import correctly in Python 3!
Gregory Szorc authored
I feel dirty for having to do this. But this is currently our approach for dealing with PyInt -> PyLong in Python 3 for this file. This removes a ton of compiler warnings by fixing unresolved symbols.
Gregory Szorc authored
More appeasing the Python 3 and compiler overlords. The code is equivalent.
Gregory Szorc authored
This makes a compiler warning go away on Python 3.
Martijn Pieters authored
- Oct 14, 2016
Martijn Pieters authored
The token parsing was getting unwieldy and was too naive about accessing arguments.