- Mar 17, 2018
Yuya Nishihara authored
Future patches will add a wrapper for a list of template mappings, which will implement a custom join(), something like {join(mappings % template)}. The original join() function is broken down as follows:

  if hasattr(joinset, 'joinfmt'):
      # hybrid.join() where values must be a list or a dict
      joinitems((joinfmt(x) for x in values), sep)
  elif isinstance(joinset, templateutil.wrapped):
      # mappable.join()
      show()
  else:
      # a plain list, a generator, or a byte string; joinfmt was identity()
      joinset = templateutil.unwrapvalue(context, joinset)
      joinitems(pycompat.maybebytestr(joinset), joiner)
- Mar 20, 2018
Yuya Nishihara authored
- Mar 17, 2018
Yuya Nishihara authored
Prepares for defining join() behavior per wrapped types and getting rid of the public joinfmt attribute.
- Mar 18, 2018
Yuya Nishihara authored
Unlike show() and tovalue(), a base mapping isn't passed to itermaps(), since itermaps() is the function that generates a partial mapping.
- Mar 17, 2018
Yuya Nishihara authored
- Apr 04, 2018
Yuya Nishihara authored
They are quite similar. Let's choose the one that uses standard Python escaping.
- Mar 31, 2018
Gregory Szorc authored
With abc interfaces, instance attributes could not satisfy @abc.abstractproperty requirements because interface conformance was tested at type creation time. When we created the abc peer interfaces, we had to make "ui" a @property to satisfy abc. Now that peer interfaces are using zope.interface and there is no import time validation (but there are tests validating instances conform to the interface), we can go back to using regular object attributes. Differential Revision: https://phab.mercurial-scm.org/D3069
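A minimal sketch of the point above, with names chosen for illustration rather than Mercurial's real peer interface definitions: with zope.interface, an ordinary instance attribute satisfies an Attribute declaration, because conformance is verified on instances (for example in tests, via verifyObject) rather than at type creation time as abc does.

  from zope.interface import Attribute, Interface, implementer
  from zope.interface.verify import verifyObject

  class ipeerexample(Interface):
      """Illustrative interface; not Mercurial's actual peer interface."""

      ui = Attribute('ui object associated with this peer')

      def capabilities():
          """Return the capabilities of the peer."""

  @implementer(ipeerexample)
  class examplepeer(object):
      def __init__(self, ui):
          self.ui = ui  # plain attribute; no @property needed

      def capabilities(self):
          return set()

  # Raises an exception if the instance doesn't conform to the interface.
  verifyObject(ipeerexample, examplepeer(ui=object()))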
Gregory Szorc authored
zope.interface is superior. Let's switch to it.

Unlike abc, which defines interfaces through a base class, zope.interface uses different types for interfaces and for implementations. So, we had to invent some new types to hold the interfaces in order to separate the interface from its default implementation. The names here could probably be better.

I've been wanting to overhaul the peer interface for a while. And wire protocol version 2 will force that work. So anticipate a refactoring of these interfaces in later commits.

With this commit, we no longer test abc interfaces in test-check-interfaces.py, so code for that has been removed.

Differential Revision: https://phab.mercurial-scm.org/D3068

# no-check-commit because of stream_out()
- Mar 30, 2018
Gregory Szorc authored
This is easier than rolling our own encoding format. As a bonus, some of our artificial limits around lengths of things went away because we are no longer using fixed length fields to hold sizes. Differential Revision: https://phab.mercurial-scm.org/D3067
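A rough sketch of the contrast described above, using the third-party cbor2 package purely for illustration (Mercurial vendors its own CBOR code): a hand-rolled format needs fixed-width length prefixes, which impose artificial size limits, while CBOR encodes lengths as variable-width integers.

  import struct

  import cbor2

  name = b'capabilities'

  # Hand-rolled framing: a 16-bit length prefix caps the field at 65535 bytes.
  handrolled = struct.pack('>H', len(name)) + name

  # CBOR: the length is part of the encoding itself, so no artificial cap.
  encoded = cbor2.dumps({'name': name, 'args': {}})
  assert cbor2.loads(encoded)['name'] == name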
- Apr 02, 2018
Pulkit Goyal authored
Differential Revision: https://phab.mercurial-scm.org/D3071
- Mar 19, 2018
Pulkit Goyal authored
Differential Revision: https://phab.mercurial-scm.org/D3070
- Apr 03, 2018
Martin von Zweigbergk authored
repo.changectx(x) was just a synonym for repo[x], so any extensions that fail due to this commit should switch over to that form. Differential Revision: https://phab.mercurial-scm.org/D3037
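For extension authors, the migration looks roughly like this (a sketch; it assumes Mercurial is importable and the script runs inside a repository):

  from mercurial import hg, ui as uimod

  repo = hg.repository(uimod.ui.load(), b'.')  # any local repository

  # Old form, removed by this commit:
  #   ctx = repo.changectx(b'tip')
  # Equivalent replacement:
  ctx = repo[b'tip']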
Martin von Zweigbergk authored
lookup() seems to be about looking up a revision based on a symbol that may come from the user (via the wire protocol), so revsymbol() is appropriate here. Differential Revision: https://phab.mercurial-scm.org/D3055
- Apr 02, 2018
Martin von Zweigbergk authored
Differential Revision: https://phab.mercurial-scm.org/D3054
Martin von Zweigbergk authored
Differential Revision: https://phab.mercurial-scm.org/D3053
Martin von Zweigbergk authored
Differential Revision: https://phab.mercurial-scm.org/D3052
Martin von Zweigbergk authored
Differential Revision: https://phab.mercurial-scm.org/D3051
Martin von Zweigbergk authored
Differential Revision: https://phab.mercurial-scm.org/D3050
Martin von Zweigbergk authored
Before this patch, "revs", in the "not base" branch, would be a list of mixed integral revnums, hex nodeids, and branch names. After this patch, they're all strings. They can still be a mix of hex nodeids and branch names, but the important thing for my future patches is that they're consistently in string form. Differential Revision: https://phab.mercurial-scm.org/D3049
Martin von Zweigbergk authored
Differential Revision: https://phab.mercurial-scm.org/D3048
- Apr 03, 2018
Martin von Zweigbergk authored
repo.lookup(x) currently simply does repo[x].node(), which supports various types of inputs. As I explained in 0194dac77c93 (scmutil: add method for looking up a context given a revision symbol, 2018-04-02), I'd like to split that up so we use the new scmutil.revsymbol() for string inputs and repo[x] for integer revnums and binary nodeids. Since repo.lookup() seems to exist in order to serve peer.lookup(), I think it should be calling revsymbol(). However, we have several callers that use repo.lookup() with something that's not a string, so we need to remove those first. This patch starts doing that. Many more will follow. Differential Revision: https://phab.mercurial-scm.org/D3047
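An illustrative sketch of the intended split, not Mercurial's actual code: user-supplied symbols go through scmutil.revsymbol(), while integer revnums and 20-byte binary nodeids keep using repo[x] directly.

  from mercurial import scmutil

  def lookup(repo, key):
      # Integer revnums and binary nodeids bypass symbol resolution.
      if isinstance(key, int) or (isinstance(key, bytes) and len(key) == 20):
          return repo[key].node()
      # Anything else is treated as a symbol that may come from the user.
      return scmutil.revsymbol(repo, key).node()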
Yuya Nishihara authored
Since we've changed to carry the similarity value in the opts dict, it makes sense to leave a string '0'-'100' value unmodified.
Augie Fackler authored
This will fix infinitepush tests that have been failing when run without --local. Differential Revision: https://phab.mercurial-scm.org/D3038
Martin von Zweigbergk authored
This was one of the few remaining uses of repo.changectx() in core. Differential Revision: https://phab.mercurial-scm.org/D3036
Martin von Zweigbergk authored
This was one of the few remaining uses of repo.changectx() in core. Differential Revision: https://phab.mercurial-scm.org/D3035
Martin von Zweigbergk authored
This was one of the few remaining uses of repo.changectx() in core. Differential Revision: https://phab.mercurial-scm.org/D3034
Martin von Zweigbergk authored
The two forms are synonymous, and the new form is by far the more common one. Differential Revision: https://phab.mercurial-scm.org/D3033
- Mar 28, 2018
Gregory Szorc authored
This is inspired by the pprint() module/function (which we can't use because the output is different on Python 2 and 3 - namely the use of b'' literals). We hook it up to `hg debugwireproto` for printing the response to a wire protocol command. This foreshadows future peer work, which will support decoding CBOR responses into rich data structures. Differential Revision: https://phab.mercurial-scm.org/D2987
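An illustrative sketch only, not the function added by this commit: a recursive formatter that always renders bytes with a b'' prefix gives identical output on Python 2 and Python 3, which the stdlib pprint does not.

  def bprint(obj):
      if isinstance(obj, bytes):
          return "b'%s'" % ''.join(
              chr(c) if 0x20 <= c < 0x7f and c not in (0x27, 0x5c)
              else '\\x%02x' % c
              for c in bytearray(obj))
      if isinstance(obj, list):
          return '[%s]' % ', '.join(bprint(v) for v in obj)
      if isinstance(obj, dict):
          return '{%s}' % ', '.join(
              '%s: %s' % (bprint(k), bprint(v)) for k, v in sorted(obj.items()))
      return repr(obj)

  # Prints {b'nodes': [b'\x00\x00\x00\x00'], b'ok': True} on both Python 2 and 3.
  print(bprint({b'ok': True, b'nodes': [b'\x00' * 4]}))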
Gregory Szorc authored
We may eventually want a separate frame type for this. But for now this is the easiest to implement. Differential Revision: https://phab.mercurial-scm.org/D2986
Gregory Szorc authored
This version won't print the full payload (which could be large). It also prints human-friendly values for types and flags. Differential Revision: https://phab.mercurial-scm.org/D2985
Gregory Szorc authored
This is part of the standard I/O interface. It is used by the framing protocol, so we need to implement it in order for frames to be decoded. Differential Revision: https://phab.mercurial-scm.org/D2984
- Mar 23, 2018
Gregory Szorc authored
zope.interface is superior to the abc module. Let's port to it. As part of this, we add tests for interface conformance for classes implementing the interface. Differential Revision: https://phab.mercurial-scm.org/D2983
- Mar 28, 2018
Gregory Szorc authored
We can't easily reuse existing command handlers for version 2 commands because the response types will be different. For example, many commands return nodes encoded as hex. Our new wire protocol is binary safe, so we'll wish to encode nodes as binary.

We /could/ teach each command handler to look at the protocol handler and change behavior based on the version in use. However, this would make the logic a bit unwieldy over time and would make it harder to design a unified protocol handler interface. I think it's better to create a clean break between version 1 and version 2 of commands on the server.

What I imagine happening is we will have separate @wireprotocommand functions for each protocol generation. Those functions will parse the request, dispatch to a common function to process it, then generate the response in their own, transport-specific manner.

This commit establishes a separate table for tracking version 1 commands from version 2 commands. The HTTP server pieces have been updated to use this new table. Most commands are marked as both version 1 and version 2, so there is little practical impact from this change.

A side-effect of this change is that we now rely on transport registration in wireprototypes.TRANSPORTS and certain properties of the protocol interface, so a test had to be updated to conform.

Differential Revision: https://phab.mercurial-scm.org/D2982
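Not the real @wireprotocommand signature, just a sketch of the per-generation command tables idea: registration records a handler in whichever generation's table it declares support for, and each server transport then routes requests using only its own table.

  commandsv1 = {}
  commandsv2 = {}

  def wireprotocommand(name, args=b'', transportversions=(1, 2)):
      def register(func):
          if 1 in transportversions:
              commandsv1[name] = (func, args)
          if 2 in transportversions:
              commandsv2[name] = (func, args)
          return func
      return register

  @wireprotocommand(b'heads')
  def heads(repo, proto):
      # Placeholder; real handlers build transport-specific responses.
      return b'...'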
Gregory Szorc authored
The version component is used for filtering/routing wire protocol commands to their proper handler. The actual version 2 of the wire protocol commands will use a different encoding of responses. We already have tests using the version 2 SSH transport, and version 2 of the wire protocol commands won't be implemented atomically. This commit marks the SSHv2 transport as version 1 so it will still invoke the version 1 commands. Once the commands are all implemented in version 2, we can restore its proper behavior. Some tests had to be disabled as a result of this change. Differential Revision: https://phab.mercurial-scm.org/D2981
Gregory Szorc authored
We generally shy away from aliasing module symbols. I think I was keeping this around for API compatibility. We've already made tons of other API breaks in the wire protocol code this release. What's one more?

.. api::

   ``wireproto`` module no longer re-exports various types used to define responses to wire protocol commands. Access these types from the ``wireprototypes`` module.

Differential Revision: https://phab.mercurial-scm.org/D2979
- Mar 26, 2018
Gregory Szorc authored
Now that we're using CBOR in the new wire protocol, let's convert command requests to it.

Before I wrote this patch and was even thinking about CBOR, I was thinking about how commands should be issued and came to the conclusion that we didn't need separate frames to represent the command name from its arguments. I already had a partially completed patch prepared to merge the frames. But with CBOR, it makes the implementation a bit simpler because we don't need to roll our own serialization.

The changes here are a bit invasive. I tried to split this into multiple commits to make it easier to review. But it was just too hard.

* "command name" and "command argument" frames have been collapsed into a "command request" frame.
* The flags for this new frame are totally different.
* Frame processing has been overhauled to reflect the new order of things.
* Test fallout was significant. A handful of tests were removed.

Altogether, I think the new code is simpler. We don't have complicated state around receiving commands. We're either receiving command request frames or command data frames. We /could/ potentially collapse command data frames into command request frames. Although I'd have to think a bit more about this before I do it.

Differential Revision: https://phab.mercurial-scm.org/D2951
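A sketch of the collapsed request payload described above, with the cbor2 package standing in for Mercurial's vendored CBOR code: the command name and its arguments travel together in one CBOR map instead of separate name and argument frames, and the frame header (request ID, type, flags) is prepended before the payload hits the wire.

  import cbor2

  payload = cbor2.dumps({
      b'name': b'known',
      b'args': {b'nodes': [b'\x11' * 20]},
  })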
Gregory Szorc authored
Today, a long-running operation on a server may run without any sign of progress on the client. This can lead to the conclusion that the server has hung or the connection has dropped. In fact, connections can and do time out due to inactivity. And a long-running server operation can result in the connection dropping prematurely because no data is being sent!

While we're inventing the new wire protocol, let's provide a mechanism for communicating progress on potentially expensive server-side events. We introduce a new frame type that conveys "progress" updates. This frame type essentially holds the data required to formulate a ``ui.progress()`` call.

We only define the frame right now. Implementing it will be a bit of work since there is no analog to progress frames in the existing wire protocol. We'll need to teach the ui object to write to the wire protocol, etc.

The use of a CBOR map may seem wasteful, as this will encode key names in every frame. This *is* wasteful. However, maps are extensible. And the intent is to always use compression via streams. Compression will make the overhead negligible since repeated strings will be mostly eliminated over the wire.

Differential Revision: https://phab.mercurial-scm.org/D2902
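A quick sketch backing the compression claim above, with cbor2 and zlib standing in for the real stream machinery: the key names repeated across many progress maps mostly disappear once the stream is compressed.

  import zlib

  import cbor2

  frames = b''.join(
      cbor2.dumps({b'topic': b'changesets', b'pos': i, b'total': 1000,
                   b'unit': b'chunks', b'item': b''})
      for i in range(1000))

  # The compressed size is a small fraction of the raw size; the per-frame
  # key-name overhead is largely eliminated by the shared compression context.
  print(len(frames), len(zlib.compress(frames)))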
- Mar 28, 2018
Gregory Szorc authored
We just vendored a library for encoding and decoding the CBOR data format. While the intent of that vendoring was to support state files, CBOR is really a nice data format. It is extensible and compact.

I've been feeling dirty inventing my own data formats for frame payloads. While custom formats can always beat out a generic format, there is a cost to be paid in terms of implementation, comprehension, etc. CBOR is compact enough that I'm not too worried about efficiency loss. I think the benefits of using a standardized format outweigh rolling our own formats. So I plan to make heavy use of CBOR in the wire protocol going forward.

This commit introduces support for encoding CBOR data in frame payloads to our function that makes a frame from a human string. We do need to employ some low-level Python code in order to evaluate a string as a Python expression. But other than that, this should hopefully be pretty straightforward. Unit tests for this function have been added.

Differential Revision: https://phab.mercurial-scm.org/D2948
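One plausible shape for the "evaluate a string as a Python expression" step mentioned above (a sketch, not necessarily what Mercurial does): text after a cbor: prefix is parsed as a Python literal and then CBOR-encoded, with cbor2 standing in for the vendored library.

  import ast

  import cbor2

  def humanstringtopayload(spec):
      if spec.startswith('cbor:'):
          return cbor2.dumps(ast.literal_eval(spec[len('cbor:'):]))
      return spec.encode('ascii')

  payload = humanstringtopayload("cbor:{b'name': b'heads', b'args': {}}")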
- Mar 26, 2018
Gregory Szorc authored
It is better to create outgoing streams through the reactor so the reactor knows which streams are active and can track them accordingly. Test output changes slightly: frames from subsequent responses no longer have the "stream begin" stream flag set, since the stream is now reused across all responses. Differential Revision: https://phab.mercurial-scm.org/D2947
Gregory Szorc authored
Previously, the frame-based protocol was just a series of frames, with each frame associated with a request ID.

In order to scale the protocol, we'll want to enable the use of compression. While it is possible to enable compression at the socket/pipe level, this has its disadvantages. The big one is it undermines the point of frames being standalone, atomic units that can be read and written: if you add compression above the framing protocol, you are back to having a stream-based protocol as opposed to something frame-based. So in order to preserve frames, compression needs to occur at the frame payload level.

Compressing each frame's payload individually will limit compression ratios because the window size of the compressor will be limited by the max frame size, which is 32-64kb as currently defined. It will also add CPU overhead, as it is more efficient for compressors to operate on fewer, larger blocks of data than more, smaller blocks. So compressing each frame independently is out. This means we need to compress each frame's payload as if it is part of a larger stream.

The simplest approach is to have 1 stream per connection. This could certainly work. However, it has disadvantages (documented below). We could also have 1 stream per RPC/command invocation. (This is the model HTTP/2 goes with.) This also has disadvantages.

The main disadvantage to one global stream is that it has the very real potential to create CPU bottlenecks doing compression. Networks are only getting faster and the performance of single CPU cores has been relatively flat. Newer compression formats like zstandard offer better CPU cycle efficiency than predecessors like zlib. But it is still all too common to saturate your CPU with compression overhead long before you saturate the network pipe.

The main disadvantage with streams per request is that you can't reap the benefits of the compression context for multiple requests. For example, if you send 1000 RPC requests (or HTTP/2 requests for that matter), the response to each would have its own compression context. The overall size of the raw responses would be larger because compression contexts wouldn't be able to reference data from another request or response.

The approach for streams as implemented in this commit is to support N streams per connection and for streams to potentially span requests and responses. As explained by the added internals docs, this facilitates servers and clients delegating independent streams and compression to independent threads / CPU cores. This helps alleviate the CPU bottleneck of compression. This design also allows compression contexts to be reused across requests/responses. This can result in improved compression ratios and less overhead for compressors and decompressors having to build new contexts.

Another feature that was defined was the ability for individual frames within a stream to declare whether that individual frame's payload uses the content encoding (read: compression) defined by the stream. The idea here is that some servers may serve data from a combination of caches and dynamic resolution. Data coming from caches may be pre-compressed. We want to facilitate servers being able to essentially stream bytes from caches to the wire with minimal overhead. Being able to mix and match which frames are compressed within a stream enables these types of advanced server functionality.

This commit defines the new streams mechanism. Basic code for supporting streams in frames has been added. But that code is seriously lacking and doesn't fully conform to the defined protocol. For example, we don't close any streams. And support for content encoding within streams is not yet implemented. The change was rather invasive and I didn't think it would be reasonable to implement the entire feature in a single commit.

For the record, I would have loved to reuse an existing multiplexing protocol to build the new wire protocol on top of. However, I couldn't find a protocol that offers the performance and scaling characteristics that I desired. Namely, it should support multiple compression contexts to facilitate scaling out to multiple CPU cores, and compression contexts should be able to live longer than single RPC requests.

HTTP/2 *almost* fits the bill. But the semantics of HTTP message exchange state that streams can only live for a single request-response. We /could/ tunnel on top of HTTP/2 streams and frames with HEADER and DATA frames. But there's no guarantee that HTTP/2 libraries and proxies would allow us to use HTTP/2 streams and frames without the HTTP message exchange semantics defined in RFC 7540 Section 8. Other RPC protocols like gRPC are built on top of HTTP/2 and thus preserve its semantics of one stream per RPC invocation. Even QUIC does this. We could attempt to invent a higher-level stream that spans HTTP/2 streams. But this would be violating HTTP/2 because there is no guarantee that HTTP/2 streams are routed to the same server. The best we can do - which is what this protocol does - is shoehorn all request and response data into a single HTTP message and create streams within. At that point, we've defined a Content-Type in HTTP parlance. It just so happens our media type can also work as a standalone, stream-based protocol, without leaning on HTTP or a similar protocol.

Differential Revision: https://phab.mercurial-scm.org/D2907
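A sketch of the per-stream compression idea described above, with zlib standing in for whatever encoder a stream would negotiate: one compression context per outgoing stream is reused across frames, and each frame is flushed so it remains independently decodable. The real protocol additionally carries a per-frame flag saying whether that frame's payload uses the stream's content encoding at all.

  import zlib

  class outputstream(object):
      def __init__(self, streamid):
          self.streamid = streamid
          self._compressor = zlib.compressobj()

      def encodepayload(self, data):
          # The same context sees every frame in this stream, so later frames
          # can back-reference data sent in earlier ones.
          return (self._compressor.compress(data)
                  + self._compressor.flush(zlib.Z_SYNC_FLUSH))

  stream = outputstream(1)
  frame1 = stream.encodepayload(b'a' * 1000)
  frame2 = stream.encodepayload(b'a' * 1000)  # far smaller thanks to context reuse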