  1. Jan 13, 2017
  2. Jan 02, 2017
    • Gregory Szorc's avatar
      util: compression APIs to support revlog decompression · f50c0db5
      Gregory Szorc authored
      Previously, compression engines had APIs for performing revlog
      compression but no mechanism to perform revlog decompression. This
      patch changes that.
      
      Revlog decompression is slightly more complicated than compression
      because in the compression case there is (currently) only a single
      engine that can be used at a time. However for decompression, a
      revlog could contain chunks from multiple compression engines. This
      means decompression needs to map to multiple engines and
      decompressors. This functionality is outside the scope of this patch.
      But it drives the decision for engines to declare a byte header
      sequence that identifies revlog data as belonging to an engine and
      an API for obtaining an engine from a revlog header.
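
      As a rough illustration of the idea (hypothetical, simplified names; not
      the actual Mercurial API), an engine manager can map declared revlog
      header bytes to engines and hand back the right decompressor:

        import zlib

        class zlibengine(object):
            revlogheader = b'x'  # zlib streams begin with 0x78 ('x')

            def revlogdecompressor(self):
                return zlib.decompress

        class compengines(object):
            def __init__(self):
                self._byrevlogheader = {}

            def register(self, engine):
                self._byrevlogheader[engine.revlogheader] = engine

            def forrevlogheader(self, header):
                # Return the engine that declared this revlog header byte.
                return self._byrevlogheader[header]

        engines = compengines()
        engines.register(zlibengine())
        chunk = zlib.compress(b'revision data')
        engine = engines.forrevlogheader(chunk[0:1])
        assert engine.revlogdecompressor()(chunk) == b'revision data'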
      f50c0db5
  3. Jan 08, 2017
    • Anton Shestakov's avatar
      crecord: add an experimental option for space key to move cursor down · 0bde7372
      Anton Shestakov authored
      I really want an option to toggle the selection on a line and move the
      cursor down in a single keystroke. It also makes sense for the space key
      to do this, because other curses UIs in the wild behave that way (e.g.
      various file managers, htop). So the idea is a config option that defaults
      to False for compatibility but makes the crecord UI a lot more useful for
      people with big hunks.

      We add this as an experimental option so we can experiment with the behavior.
      0bde7372
  4. Jan 02, 2017
    • Gregory Szorc's avatar
      perf: support multiple compression engines in perfrevlogchunks · 168ef0a4
      Gregory Szorc authored
      Now that the revlog has a reference to a compressor, it is
      possible to swap in other compression engines. So, teach
      `hg perfrevlogchunks` to do that.
      
      The default behavior of `hg perfrevlogchunks` is now to measure the
      compression performance of all compression engines implementing the
      revlog compressor API. This effectively adds the no-op "none"
      compressor and zstd (when available) into the default set.
      
      While we can't yet plug alternate compressors into revlogs, this
      command gives us a preview of the performance. On the mozilla-unified
      repository:
      
      $ hg perfrevlogchunks -c
      ! compress w/ none
      ! wall 0.115159 comb 0.110000 user 0.110000 sys 0.000000 (best of 86)
      ! compress w/ zlib
      ! wall 5.681406 comb 5.680000 user 5.680000 sys 0.000000 (best of 3)
      ! compress w/ zstd
      ! wall 2.624781 comb 2.620000 user 2.620000 sys 0.000000 (best of 4)
      
      $ hg perfrevlogchunks -m
      ! compress w/ none
      ! wall 0.124486 comb 0.120000 user 0.120000 sys 0.000000 (best of 79)
      ! compress w/ zlib
      ! wall 10.144701 comb 10.150000 user 10.150000 sys 0.000000 (best of 3)
      ! compress w/ zstd
      ! wall 4.383118 comb 4.390000 user 4.390000 sys 0.000000 (best of 3)
      
      Those numbers for zstd look promising. But they aren't the full story.
      For that, we'll need to look at decompression times and storage sizes.
      Stay tuned...
      168ef0a4
    • Gregory Szorc's avatar
      revlog: use compression engine API for compression · 78ac56ae
      Gregory Szorc authored
      This commit swaps in the just-added revlog compressor API into
      the revlog class.
      
      Instead of implementing zlib compression inline in compress(), we
      now store a cached-on-first-use revlog compressor on each revlog
      instance and invoke its "compress()" method.
      
      As part of this, revlog.compress() has been refactored a bit to use
      a cleaner code flow and modern formatting (e.g. avoiding
      parentheses around returned tuples).
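
      A minimal sketch of the cached-on-first-use pattern described above
      (illustrative names; the real code lives on the revlog class):

        import zlib

        class zlibcompressor(object):
            def compress(self, data):
                return zlib.compress(data)

        class revlog(object):
            def __init__(self):
                self._compressor = None

            @property
            def _compengine(self):
                # Instantiate the revlog compressor lazily and reuse it for
                # every subsequent compress() call on this revlog.
                if self._compressor is None:
                    self._compressor = zlibcompressor()
                return self._compressor

            def compress(self, data):
                if not data:
                    return b'', data
                compressed = self._compengine.compress(data)
                if len(compressed) < len(data):
                    return b'', compressed
                return b'u', data

        print(revlog().compress(b'some revision text' * 10))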
      
      On a mozilla-unified repo, here are the "compress" times for a few
      commands:
      
      $ hg perfrevlogchunks -c
      ! wall 5.772450 comb 5.780000 user 5.780000 sys 0.000000 (best of 3)
      ! wall 5.795158 comb 5.790000 user 5.790000 sys 0.000000 (best of 3)
      
      $ hg perfrevlogchunks -m
      ! wall 9.975789 comb 9.970000 user 9.970000 sys 0.000000 (best of 3)
      ! wall 10.019505 comb 10.010000 user 10.010000 sys 0.000000 (best of 3)
      
      Compression times did seem to slow down just a little. There are
      360,210 changelog revisions and 359,342 manifest revisions. For the
      changelog, mean time to compress a revision increased from ~16.025us to
      ~16.088us. That's basically a function call or an attribute lookup. I
      suppose this is the price you pay for abstraction. It's so low that
      I'm not concerned.
      78ac56ae
    • Gregory Szorc's avatar
      util: compression APIs to support revlog compression · 31e1f0d4
      Gregory Szorc authored
      As part of "zstd all of the things," we need to teach revlogs to
      use non-zlib compression formats. Because we're routing all compression
      via the "compression manager" and "compression engine" APIs, we need to
      introduce functionality there for performing revlog operations.
      
      Ideally, revlog compression and decompression operations would be
      implemented in terms of simple "compress" and "decompress" primitives.
      However, there are a few considerations that make us want to have a
      specialized primitive for handling revlogs:
      
      1) Performance. Revlogs tend to do compression and especially
         decompression operations in batches. Any overhead for e.g.
         instantiating a "context" for performing an operation can be
         noticed. For this reason, our "revlog compressor" primitive is
         reusable. For zstd, we reuse the same compression "context" for
         multiple operations; I've measured this to be measurably faster
         than constructing a new context for each operation.
      
      2) Specialization. By having a primitive dedicated to revlog use,
         we can make revlog-specific choices and leave the door open for
         more functionality in the future. For example, the zstd revlog
         compressor may one day make use of dictionary compression.
      
      A future patch will introduce a decompress() on the compressor
      object.
      
      The code for the zlib compressor is basically copied from
      revlog.compress(). However, it doesn't handle the empty input
      case, the null first byte case, or the 'u' prefix case. These
      cases will continue to be handled in revlog.py once that code is
      ported to use this API.
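
      To illustrate why a reusable compressor object matters (a sketch using
      the python-zstandard API, assumed to be installed; the real engine code
      differs):

        import zstandard as zstd

        class zstdrevlogcompressor(object):
            def __init__(self, level=3):
                # The relatively expensive part: build the compression
                # context once...
                self._cctx = zstd.ZstdCompressor(level=level)

            def compress(self, data):
                # ...then reuse it for every chunk this revlog compresses.
                return self._cctx.compress(data)

        compressor = zstdrevlogcompressor()
        for chunk in (b'rev 0 text', b'rev 1 text', b'rev 2 text'):
            print(len(compressor.compress(chunk)))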
      31e1f0d4
    • Gregory Szorc's avatar
      revlog: move decompress() from module to revlog class (API) · b6f455a6
      Gregory Szorc authored
      Upcoming patches will convert revlogs to use the compression engine
      APIs to perform all things compression. The yet-to-be-introduced
      APIs support a persistent "compressor" object so the same object
      can be reused for multiple compression operations, leading to
      better performance. In addition, compression engines like zstd
      may wish to tweak compression engine state based on the revlog
      (e.g. per-revlog compression dictionaries).
      
      A global and shared decompress() function will shortly no longer
      make much sense. So, we move decompress() to be a method of the
      revlog class. It joins compress() there.
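
      For context, decompress() dispatches on the first byte of the stored
      chunk, roughly like this (simplified sketch, not the exact code):

        import zlib

        class revlog(object):
            def decompress(self, data):
                if not data:
                    return data
                t = data[0:1]
                if t == b'\0':   # uncompressed entry starting with NUL
                    return data
                if t == b'x':    # zlib-compressed chunk
                    return zlib.decompress(data)
                if t == b'u':    # explicit "stored uncompressed" marker
                    return data[1:]
                raise ValueError('unknown compression type %r' % t)

        assert revlog().decompress(b'uplain text') == b'plain text'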
      
      On the mozilla-unified repo, we can measure the impact of this change
      on reading performance:
      
      $ hg perfrevlogchunks -c
      ! chunk
      ! wall 1.932573 comb 1.930000 user 1.900000 sys 0.030000 (best of 6)
      ! wall 1.955183 comb 1.960000 user 1.930000 sys 0.030000 (best of 6)
      ! chunk batch
      ! wall 1.787879 comb 1.780000 user 1.770000 sys 0.010000 (best of 6)
      ! wall 1.774444 comb 1.770000 user 1.750000 sys 0.020000 (best of 6)
      
      "chunk" appeared to become slower but "chunk batch" got faster. Upon
      further examination by running both sets multiple times, the numbers
      appear to converge across all runs. This tells me that there is no
      perceptible performance impact from this refactor.
      b6f455a6
    • Gregory Szorc's avatar
      revlog: make compressed size comparisons consistent · 4215dc1b
      Gregory Szorc authored
      revlog.compress() compares the compressed size to the input size
      and throws away the compressed data if it is larger than the input.
      This is the correct thing to do, as storing compressed data that
      is larger than the input takes up more storage space and makes reading
      slower.
      
      However, the comparison was implemented inconsistently. For the
      streaming compression mode, we threw away the result if it was
      greater than or equal to the input size. But for the one-shot
      compression, we threw away the compressed data only if it was
      strictly greater than the input size!
      
      This patch changes the comparison for the simple case so it is
      consistent with the streaming case.
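
      In other words, both paths now keep the compressed result only when it
      is strictly smaller than the input; otherwise the input is stored with
      a one-byte 'u' header. A small illustrative sketch:

        import os
        import zlib

        def compresschunk(text):
            compressed = zlib.compress(text)
            if len(compressed) < len(text):
                return compressed
            # Not worth it: store uncompressed behind a 'u' header.
            return b'u' + text

        # Incompressible input now costs exactly one extra byte.
        blob = os.urandom(64)
        assert len(compresschunk(blob)) == len(blob) + 1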
      
      As a few tests demonstrate, this adds 1 byte to some revlog entries.
      This is because of an added 'u' header on the chunk. It seems
      somewhat wrong to increase the revlog size here. However, IMO the cost
      of 1 byte in storage is insignificant compared to the performance gains
      of avoiding decompression. This patch should invite questions around
      the heuristic for throwing away compressed data. For example, I'd argue
      we should be more liberal about rejecting compressed data, additionally
      doing so where the number of bytes saved fails to reach a threshold.
      But we can have this discussion another time.
      4215dc1b
  5. Jan 08, 2017
  6. Jan 09, 2017
  7. Dec 31, 2016
    • Sean Farley's avatar
      patch: add index line for diff output · b8ad243f
      Sean Farley authored
      This helps highlighting in third-party diff coloring (which assumes git
      output) and maintains pedantic correctness with diff --git.
      
      Tests will be added at the end of the series.
      b8ad243f
  8. Jan 09, 2017
    • Sean Farley's avatar
      patch: add config knob for displaying the index header · d1901c4c
      Sean Farley authored
      This config knob can take an integer between 0 and 40 or a
      keyword ('none', 'short', 'full') to control the length of the hash to
      output. It will display diffs with the git index header, like so:
      
        diff --git a/mercurial/mdiff.py b/mercurial/mdiff.py
        index 112edf7..d6b52c5 100644
      
      We'll put this in the experimental section for now.
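
      One plausible way to resolve such a knob into a number of hex digits
      (purely illustrative; the keyword-to-length mapping here is an
      assumption, not necessarily what the patch does):

        def indexheaderlength(value):
            # 0 disables the index line entirely.
            keywords = {'none': 0, 'short': 12, 'full': 40}
            if value in keywords:
                return keywords[value]
            length = int(value)
            if not 0 <= length <= 40:
                raise ValueError('index header length must be between 0 and 40')
            return length

        print(indexheaderlength('short'))  # -> 12
        print(indexheaderlength('7'))      # -> 7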
      d1901c4c
  9. Jan 12, 2017
  10. Jan 08, 2017
    • Matt Harbison's avatar
      help: eliminate duplicate text for revset string patterns · 5dd67f09
      Matt Harbison authored
      There's no reason to duplicate this so many times, and it's likely an instance
      will be missed if support for a new pattern is added and documented.  The
      stringmatcher is mostly used by revsets, though it is also used for the 'tag'
      related templates, and namespace filtering in the journal extension.  So maybe
      there's a better place to document it.  `hg help patterns` seems inappropriate,
      because that is all file pattern matching.
      
      While here, indicate how to perform case insensitive regex searches.
      5dd67f09
    • Matt Harbison's avatar
      revset: add regular expression support to 'desc' · 931a6088
      Matt Harbison authored
      This is a case-insensitive predicate like 'author', so it conforms to the
      existing behavior of performing case-insensitive regex matching.
      931a6088
  11. Jan 12, 2017
  12. Nov 25, 2016
    • Gregory Szorc's avatar
      repair: clean up stale lock file from store backup · f2c069bf
      Gregory Szorc authored
      Since we did a directory rename on the stores, the source
      repository's lock path now references the dest repository's
      lock path and the dest repository's lock path now references
      a non-existent filename.
      
      So releasing the lock on the source will unlock the dest and
      releasing the lock on the dest will no-op because it fails due
      to file not found. So we clean up the dest's lock manually.
      f2c069bf
    • Gregory Szorc's avatar
      repair: copy non-revlog store files during upgrade · 2603d048
      Gregory Szorc authored
      The store contains more than just revlogs. This patch teaches the
      upgrade code to copy regular files as well.
      
      As the test changes demonstrate, the phaseroots file is now copied.
      2603d048
  13. Dec 19, 2016
    • Gregory Szorc's avatar
      repair: migrate revlogs during upgrade · 38aa1ca9
      Gregory Szorc authored
      Our next step for in-place upgrade is to migrate store data. Revlogs
      are the biggest source of data within the store and a store is useless
      without them, so we implement their migration first.
      
      Our strategy for migrating revlogs is to walk the store and call
      `revlog.clone()` on each revlog. There are some minor complications.
      
      Because revlogs have different storage options (e.g. changelog has
      generaldelta and delta chains disabled), we need to obtain the
      correct class of revlog so inserted data is encoded properly for its
      type.
      
      I made various attempts at implementing progress indicators that didn't
      lead to frustration from false "it's almost done" readings.
      
      I initially used a single progress bar based on number of revlogs.
      However, this quickly churned through all filelogs, got to 99% then
      effectively froze at 99.99% when it got to the manifest.
      
      So I converted the progress bar to total revision count. This was a
      little bit better. But the manifest was still significantly slower
      than filelogs and it took forever to process the last few percent.
      
      I then tried both revision/chunk bytes and raw bytes as the
      denominator. This had the opposite effect: because so much data is in
      manifests, it would churn through filelogs without showing much
      progress. When it got to manifests, it would fill in 90+% of the
      progress bar.
      
      I finally gave up having a unified progress bar and instead implemented
      3 progress bars: 1 for filelog revisions, 1 for manifest revisions, and
      1 for changelog revisions. I added extra messages indicating the total
      number of revisions of each so users know there are more progress bars
      coming.
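
      In outline, the migration drives revlog.clone() with a progress
      callback, with totals computed up front (a toy model, not the real
      code):

        class fakerevlog(object):
            """Stand-in for a revlog; just enough to show the flow."""
            def __init__(self, name, revisions):
                self.name, self.revisions = name, list(revisions)

            def __len__(self):
                return len(self.revisions)

            def clone(self, dest, addrevisioncb=None):
                for rev in self.revisions:
                    dest.revisions.append(rev)
                    if addrevisioncb:
                        addrevisioncb(rev)

        def migraterevlogs(sources):
            # Counting first also proves every source revlog can be opened.
            total = sum(len(rl) for rl in sources)
            done = [0]

            def progress(rev):
                done[0] += 1
                print('migrated %d/%d revisions' % (done[0], total))

            for rl in sources:
                rl.clone(fakerevlog(rl.name, []), addrevisioncb=progress)

        migraterevlogs([fakerevlog('a.i', ['r0', 'r1']),
                        fakerevlog('00manifest.i', ['r0'])])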
      
      I also added extra messages before and after each stage to give extra
      details about what is happening. Strictly speaking, this isn't
      necessary. But the numbers are impressive. For example, when converting
      a non-generaldelta mozilla-central repository, the messages you see are:
      
         migrating 2475593 total revisions (1833043 in filelogs, 321156 in manifests, 321394 in changelog)
         migrating 1.67 GB in store; 2508 GB tracked data
         migrating 267868 filelogs containing 1833043 revisions (1.09 GB in store; 57.3 GB tracked data)
         finished migrating 1833043 filelog revisions across 267868 filelogs; change in size: -415776 bytes
         migrating 1 manifests containing 321156 revisions (518 MB in store; 2451 GB tracked data)
      
      That "2508 GB" figure really blew me away. I had no clue that the raw
      tracked data in mozilla-central was that large. Granted, 2451 GB is in
      the manifest and "only" 57.3 GB is in filelogs. But still.
      
      It's worth noting that gratuitous loading of source revlogs in order
      to display numbers and progress bars does serve a purpose: it ensures
      we can open all source revlogs. We don't want to spend several minutes
      copying revlogs only to encounter a permissions error or similar later.
      
      As part of this commit, we also add swapping of the store directory
      to the upgrade function. After revlogs are converted, we move the
      old store into the backup directory then move the temporary repo's
      store into the old store's location. On well-behaved systems, this
      should be 2 atomic operations and the window of inconsistency should be
      very narrow.
      
      There are still a few improvements to be made to store copying and
      upgrading. But this commit gets the bulk of the work out of the way.
      38aa1ca9
    • Gregory Szorc's avatar
      revlog: add clone method · 1c7368d1
      Gregory Szorc authored
      Upcoming patches will introduce functionality for in-place
      repository/store "upgrades." Copying the contents of a revlog
      feels sufficiently low-level to warrant being in the revlog
      class. So this commit implements that functionality.
      
      Because full delta recomputation can be *very* expensive (we're
      talking several hours on the Firefox repository), we support
      multiple modes of execution with regards to delta (re)use. This
      will allow repository upgrades to choose the "level" of
      processing/optimization they wish to perform when converting
      revlogs.
      
      It's not obvious from this commit, but "addrevisioncb" will be
      used for progress reporting.
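
      A toy model of the clone() interface and the delta reuse levels it
      supports (mode names here are illustrative, not the real constants):

        DELTAREUSE_ALWAYS = 'always'  # copy stored deltas verbatim (fast)
        DELTAREUSE_NEVER = 'never'    # recompute every delta (slow, thorough)

        class minirevlog(object):
            def __init__(self):
                self.entries = []  # (fulltext, precomputed delta or None)

            def addrevision(self, text, delta=None):
                self.entries.append((text, delta))

            def clone(self, destrevlog, deltareuse=DELTAREUSE_ALWAYS,
                      addrevisioncb=None):
                for text, delta in self.entries:
                    if deltareuse == DELTAREUSE_ALWAYS and delta is not None:
                        destrevlog.addrevision(text, delta=delta)
                    else:
                        # Drop the stored delta and let the destination
                        # recompute an optimal one.
                        destrevlog.addrevision(text)
                    if addrevisioncb is not None:
                        addrevisioncb(text)  # hook for progress reporting

        src, dst = minirevlog(), minirevlog()
        src.addrevision(b'v1')
        src.addrevision(b'v2', delta=b'+2')
        src.clone(dst, deltareuse=DELTAREUSE_NEVER)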
      1c7368d1
    • Gregory Szorc's avatar
      repair: begin implementation of in-place upgrading · 7de7afd8
      Gregory Szorc authored
      Now that all the upgrade planning work is in place, we can start
      doing the real work: actually upgrading a repository.
      
      The main goal of this commit is to get the "framework" for running
      in-place upgrade actions in place.
      
      Rather than get too clever and low-level with regards to in-place
      upgrades, our strategy is to create a new, temporary repository,
      copy data to it, then replace the old data with the new. This allows
      us to reuse a lot of code in localrepo.py around store interaction,
      which will eventually consume the bulk of the upgrade code.
      
      But we have to start small. This patch implements adding new
      repository requirements. But it still sets up a temporary
      repository and locks it and the source repo before performing the
      requirements file swap. This means all the plumbing is in place
      to implement store copying in subsequent commits.
      7de7afd8
    • Gregory Szorc's avatar
      repair: determine what upgrade will do · 3997edc4
      Gregory Szorc authored
      This commit introduces code for determining what actions/improvements
      an upgrade should perform.
      
      The "upgradefindimprovements" function introduces a mechanism to
      return a list of improvements that can be made to a repository.
      Each improvement is effectively an action that an upgrade will
      perform. Associated with each of these improvements is metadata
      that will be used to inform users what's wrong and what an
      upgrade will do.
      
      Each "improvement" is categorized as a "deficiency" or an
      "optimization." TBH, I'm not thrilled about the terminology and
      am receptive to constructive bikeshedding. The main difference
      between a "deficiency" and an "optimization" is that a deficiency is
      always corrected (if it deviates from the current config), while an
      "optimization" is an optional action that goes above and beyond
      to improve the state of the repository (usually by requiring more
      CPU during upgrade).
      
      Our initial set of improvements identifies missing repository
      requirements, a single, easily correctable problem with
      changelog storage, and a set of "optimizations" related to delta
      recalculation.
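
      A sketch of the shape of the data this produces (field and value names
      are illustrative):

        from collections import namedtuple

        improvement = namedtuple(
            'improvement', ['name', 'type', 'description', 'upgrademessage'])

        DEFICIENCY = 'deficiency'      # always corrected when present
        OPTIMIZATION = 'optimization'  # optional; usually costs extra CPU

        def findimprovements(requirements):
            found = []
            if 'generaldelta' not in requirements:
                found.append(improvement(
                    'generaldelta', DEFICIENCY,
                    'repository storage does not use generaldelta',
                    'storage will be converted to generaldelta'))
            found.append(improvement(
                'redeltaall', OPTIMIZATION,
                'deltas can be recomputed against optimal bases',
                'all deltas will be recalculated (this can be slow)'))
            return found

        for i in findimprovements({'revlogv1', 'store'}):
            print(i.type, i.name, '-', i.upgrademessage)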
      
      The main "upgraderepo" function has been expanded to handle
      improvements. It queries for the list of improvements and determines
      which of them will run based on the current repository state and user
      request.
      
      I went through numerous iterations of the output format before
      settling on a ReST-inspired definition list format. (I used
      bulleted lists in the first submission of this commit and could
      not get it to format just right.) Even with the various iterations,
      I'm still not super thrilled with the format. But, this is a debug*
      command, so that should mean we can refine the output without BC
      concerns.
      3997edc4
    • Gregory Szorc's avatar
      repair: implement requirements checking for upgrades · 513d68a9
      Gregory Szorc authored
      This commit introduces functionality for upgrading a repository in
      place. The first part that's implemented is testing for upgrade
      "compatibility." This is done by examining repository requirements.
      
      There are 5 functions returning sets of requirements that control
      upgrading. Why so many functions? Mainly to support extensions.
      Functions are easier to monkeypatch than module variables.
      
      Astute readers will see that we don't support "manifestv2" and
      "treemanifest" requirements in the upgrade mechanism. I don't have
      a great answer for why other than this is a complex set of patches
      and I don't want to deal with the complexity of these experimental
      features just yet. We can teach the upgrade mechanism about them
      later, once the basic upgrade mechanism is in place.
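
      Roughly, the requirements check combines a few of those sets like this
      (function names and set contents are illustrative):

        def requiredsourcerequirements():
            # The source repo must have these for an upgrade to be possible.
            return {'revlogv1', 'store'}

        def blocksourcerequirements():
            # Requirements we do not (yet) know how to convert.
            return {'manifestv2', 'treemanifest'}

        def supporteddestrequirements():
            # Everything the upgraded repo is allowed to end up with.
            return {'revlogv1', 'store', 'fncache', 'dotencode', 'generaldelta'}

        def checkrequirements(reporequirements):
            missing = requiredsourcerequirements() - reporequirements
            blocking = blocksourcerequirements() & reporequirements
            unsupported = reporequirements - supporteddestrequirements()
            if missing or blocking or unsupported:
                raise RuntimeError('cannot upgrade: missing=%r blocking=%r '
                                   'unsupported=%r'
                                   % (missing, blocking, unsupported))

        checkrequirements({'revlogv1', 'store', 'fncache'})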
      
      This commit also introduces the "upgraderepo" function. This will be
      our main routine for performing an in-place upgrade. Currently, it
      just implements requirements checking. The structure of some code in
      this function may look a bit weird (e.g. the inline function that is
      only called once). But this will make sense after future commits.
      513d68a9
  14. Nov 25, 2016
    • Gregory Szorc's avatar
      debugcommands: stub for debugupgraderepo command · eaa56071
      Gregory Szorc authored
      Currently, if Mercurial introduces a new repository/store feature or
      changes behavior of an existing feature, users must perform an
      `hg clone` to create a new repository with hopefully the
      correct/optimal settings. Unfortunately, even `hg clone` may not
      give the correct results. For example, if you do a local `hg clone`,
      you may get hardlinks to revlog files that inherit the old state.
      If you `hg clone` from a remote or `hg clone --pull`, changegroup
      application may bypass some optimization, such as converting to
      generaldelta.
      
      Optimizing a repository is harder than it seems and requires more
      than a simple `hg` command invocation.
      
      This commit starts the process of changing that. We introduce
      `hg debugupgraderepo`, a command that performs an in-place upgrade
      of a repository to use new, optimal features. The command is just
      a stub right now. Features will be added in subsequent commits.
      
      This commit does foreshadow some of the behavior of the new command,
      notably that it doesn't do anything by default and that it takes
      arguments that influence what actions it performs. These will be
      explained more in subsequent commits.
      eaa56071
  15. Jan 12, 2017
  16. Jan 11, 2017
    • Martin von Zweigbergk's avatar
      help: merge revsets.txt into revisions.txt · e520f0f4
      Martin von Zweigbergk authored
      Selecting single and multiple revisions is closely related, so let's
      put it in one place, so users can easily find it. We actually did not
      even point to "hg help revsets" from "hg help revisions", but now that
      they're on a single page, that won't be necessary.
      e520f0f4
    • Martin von Zweigbergk's avatar
      tests: use `hg help dates` instead of `hg help revs` in test · 43839a24
      Martin von Zweigbergk authored
      The revisions help is already long and will get longer, so switch to
      another short and stable topic.
      43839a24
    • Martin von Zweigbergk's avatar
      help: use a single paragraph to describe full and abbreviated nodeids · bbb5cc55
      Martin von Zweigbergk authored
      The texts describing 40-digit strings and the abbreviated form are
      closely related, so make it a single paragraph.
      bbb5cc55
    • Gregory Szorc's avatar
      hgweb: support Content Security Policy · d7bf7d2b
      Gregory Szorc authored
      Content-Security-Policy (CSP) is a web security feature that allows
      servers to declare what loaded content is allowed to do. For example,
      a policy can prevent loading of images, JavaScript, CSS, etc unless
      the source of that content is whitelisted (by hostname, URI scheme,
      hashes of content, etc). It's a nifty security feature that provides
      extra mitigation against some attacks, notably XSS.
      
      Mitigation against these attacks is important for Mercurial because
      hgweb renders repository data, which is commonly untrusted. While we
      make attempts to escape things, etc, there's the possibility that
      malicious data could be injected into the site content. If this happens
      today, the full power of the web browser is available to that
      malicious content. A restrictive CSP policy (defined by the server
      operator and sent in an HTTP header which is outside the control of
      malicious content) could restrict browser capabilities and mitigate
      security problems posed by malicious data.
      
      CSP works by emitting an HTTP header declaring the policy that browsers
      should apply. Ideally, this header would be emitted by a layer above
      Mercurial (likely the HTTP server doing the WSGI "proxying"). This
      works for some CSP policies, but not all.
      
      For example, policies to allow inline JavaScript may require setting
      a "nonce" attribute on <script>. This attribute value must be unique
      and non-guessable. And, the value must be present in the HTTP header
      and the HTML body. This means that coordinating the value between
      Mercurial and another HTTP server could be difficult: it is much
      easier to generate and emit the nonce in a central location.
      
      This commit introduces support for emitting a
      Content-Security-Policy header from hgweb. A config option defines
      the header value. If present, the header is emitted. A special
      "%nonce%" syntax in the value triggers generation of a nonce and
      inclusion in <script> elements in templates. The inclusion of a
      nonce does not occur unless "%nonce%" is present. This makes this
      commit completely backwards compatible and the feature opt-in.
      
      The nonce is a type 4 UUID, which is the flavor that is randomly
      generated. It has 122 random bits, which should be plenty to satisfy
      the guarantees of a nonce.
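
      A small sketch of the nonce handling described above (simplified;
      helper and template names are illustrative):

        import uuid

        def cspvalues(configuredpolicy):
            """Return (header value, nonce) for a single request."""
            nonce = None
            if configuredpolicy and '%nonce%' in configuredpolicy:
                # Type 4 UUID: randomly generated, 122 bits of randomness.
                nonce = uuid.uuid4().hex
                configuredpolicy = configuredpolicy.replace('%nonce%', nonce)
            return configuredpolicy, nonce

        policy = "default-src 'none'; script-src 'self' 'nonce-%nonce%'"
        header, nonce = cspvalues(policy)
        print('Content-Security-Policy: ' + header)
        print('<script nonce="%s" src="/static/mercurial.js"></script>' % nonce)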
      d7bf7d2b
    • Gregory Szorc's avatar
      hgweb: call process_dates() via DOM event listener · eb7de21b
      Gregory Szorc authored
      All the hgweb templates include mercurial.js in their header. All
      the hgweb templates have the same <script> boilerplate to run
      process_dates(). This patch factors that function call into
      mercurial.js as part of a DOMContentLoaded event listener.
      eb7de21b
  17. Dec 24, 2016
    • Gregory Szorc's avatar
      protocol: send application/mercurial-0.2 responses to capable clients · e75463e3
      Gregory Szorc authored
      With this commit, the HTTP transport now parses the X-HgProto-<N>
      header to determine what media type and compression engine to use for
      responses. So far, we only compress responses that are already being
      compressed with zlib today (stream response types to specific
      commands). We can expand things to cover additional response types
      later.
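
      Conceptually, the server-side selection works like this (a simplified
      sketch; we assume space-delimited header tokens such as "0.2" and
      "comp=zstd,zlib", per internals.wireproto):

        def parsehgproto(headers):
            # Concatenate X-HgProto-1, X-HgProto-2, ... header values.
            value, i = [], 1
            while ('X-HgProto-%d' % i) in headers:
                value.append(headers['X-HgProto-%d' % i])
                i += 1
            mediatypes, comps = set(), []
            for token in ' '.join(value).split():
                if token.startswith('comp='):
                    comps = token[len('comp='):].split(',')
                else:
                    mediatypes.add('application/mercurial-' + token)
            return mediatypes, comps

        mediatypes, comps = parsehgproto(
            {'X-HgProto-1': '0.1 0.2 comp=zstd,zlib,none'})
        if 'application/mercurial-0.2' in mediatypes:
            # Use the first client-advertised engine the server also supports.
            engine = next(c for c in comps if c in ('zstd', 'zlib'))
            print('responding with application/mercurial-0.2 + ' + engine)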
      
      The practical side-effect of this commit is that non-zlib compression
      engines will be used if both ends support them. This means if both
      ends have zstd support, zstd - not zlib - will be used to compress
      data!
      
      When cloning the mozilla-unified repository between a local HTTP
      server and client, the benefits of non-zlib compression are quite
      noticeable:
      
        engine     server CPU (s)   client CPU (s)    bundle size
      zlib (l=6)      174.1            283.2         1,148,547,026
      zstd (l=1)       99.2            267.3         1,127,513,841
      zstd (l=3)      103.1            266.9         1,018,861,363
      zstd (l=7)      128.3            269.7           919,190,278
      zstd (l=10)     162.0               -            894,547,179
      none             95.3            277.2         4,097,566,064
      
      The default zstd compression level is 3. So if you deploy zstd
      capable Mercurial to your clients and servers and CPU time on
      your server is dominated by "getbundle" requests (clients cloning
      and pulling) - and my experience at Mozilla tells me this is often
      the case - this commit could drastically reduce your server-side
      CPU usage *and* save on bandwidth costs!
      
      Another benefit of this change is that server operators can install
      *any* compression engine. While it isn't enabled by default, the
      "none" compression engine can now be used to disable wire protocol
      compression completely. Previously, commands like "getbundle" always
      zlib compressed output, adding considerable overhead to generating
      responses. If you are on a high speed network and your server is under
      high load, it might be advantageous to trade bandwidth for CPU.
      That said, zstd at level 1 doesn't use that much CPU, so I'm not
      convinced that disabling compression wholesale is worthwhile. And, my
      data seems to indicate a slow down on the client without compression.
      I suspect this is due to a lack of buffering resulting in an increase
      in socket read() calls and/or the fact we're transferring an extra 3 GB
      of data (parsing HTTP chunked transfer and processing extra TCP packets
      can add up). This is definitely worth investigating and optimizing. But
      since the "none" compressor isn't enabled by default, I'm inclined to
      punt on this issue.
      
      This commit introduces tons of tests. Some of these should arguably
      have been implemented in previous commits. But it was difficult to
      test without the server functionality in place.
      e75463e3
    • Gregory Szorc's avatar
      httppeer: advertise and support application/mercurial-0.2 · a520aefb
      Gregory Szorc authored
      Now that servers expose a capability indicating they support
      application/mercurial-0.2 and compression, clients can key off
      this to say they support responses that are compressed with
      various compression formats.
      
      After this commit, the HTTP wire protocol client now sends an
      "X-HgProto-<N>" request header indicating its support for
      "application/mercurial-0.2" media type and various compression
      formats.
      
      This commit also implements support for handling
      "application/mercurial-0.2" responses. It simply reads the header
      compression engine identifier then routes the remainder of the
      response to the appropriate decompressor.
      
      There were some test changes, but only to logging. That points to
      an obvious gap in our test coverage. This will be addressed in a
      subsequent commit once server support is in place (it is hard to
      test without server support).
      a520aefb
    • Gregory Szorc's avatar
      wireproto: advertise supported media types and compression formats · 35b516f8
      Gregory Szorc authored
      This commit introduces support for advertising a server's support for
      media types and compression formats in accordance with the spec defined
      in internals.wireproto.
      
      The bulk of the new code is a helper function in wireproto.py to
      obtain a prioritized list of compression engines available to the
      wire protocol. While not utilized yet, we implement support
      for obtaining the list of compression engines advertised by the
      client.
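
      The prioritized-engine helper and the resulting capability string look
      roughly like this (engine names and priorities are illustrative):

        # (wire protocol identifier, server priority); priority 0 means
        # "supported, but never advertised/selected by default".
        ENGINES = [('zstd', 50), ('zlib', 20), ('none', 0), ('bz2', 0)]

        def supportedcompengines(engines):
            usable = [(name, prio) for name, prio in engines if prio > 0]
            usable.sort(key=lambda e: e[1], reverse=True)
            return [name for name, _prio in usable]

        def compressioncapability(engines):
            return 'compression=%s' % ','.join(supportedcompengines(engines))

        print(compressioncapability(ENGINES))  # compression=zstd,zlib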
      
      The upcoming HTTP protocol enhancements are a bit lower-level than
      existing tests (most existing tests are command centric). So,
      this commit establishes a new test file that will be appropriate
      for holding tests around the functionality of the HTTP protocol
      itself.
      
      Rounding out this change, `hg debuginstall` now prints compression
      engines available to the server.
      35b516f8
    • Gregory Szorc's avatar
      util: declare wire protocol support of compression engines · 7283719e
      Gregory Szorc authored
      This patch implements a new compression engine API allowing
      compression engines to declare support for the wire protocol.
      
      Support is declared by returning a compression format string
      identifier (which will be added to payloads to signal the compression
      type of the data that follows) along with default integer priorities
      for the engine.
      
      Accessor methods have been added to the compression engine manager
      class to facilitate use.
      
      Note that the "none" and "bz2" engines declare wire protocol support
      but aren't enabled by default due to their priorities being 0. It
      is essentially free from a coding perspective to support these
      compression formats, so we do it in case anyone may derive use from
      it.
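
      A sketch of what an engine's wire protocol declaration might look like
      (the namedtuple and attribute names are illustrative):

        from collections import namedtuple

        compewireprotosupport = namedtuple(
            'compewireprotosupport', ['name', 'serverpriority', 'clientpriority'])

        class zstdengine(object):
            def wireprotosupport(self):
                return compewireprotosupport('zstd', 50, 50)

        class bz2engine(object):
            def wireprotosupport(self):
                # Priority 0: supported if explicitly configured, but never
                # chosen by default.
                return compewireprotosupport('bz2', 0, 0)

        for engine in (zstdengine(), bz2engine()):
            support = engine.wireprotosupport()
            print(support.name, support.serverpriority, support.clientpriority)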
      7283719e
    • Gregory Szorc's avatar
      internals: document compression negotiation · 753b9d43
      Gregory Szorc authored
      As part of adding zstd support to all of the things, we'll need
      to teach the wire protocol to support non-zlib compression formats.
      
      This commit documents how we'll implement that.
      
      To understand how we arrived at this proposal, let's look at how
      things are done today.
      
      The wire protocol today doesn't have a unified format. Instead,
      there is a limited facility for differentiating replies as successful
      or not. And, each command essentially defines its own response format.
      
      A significant deficiency in the current protocol is the lack of
      payload framing over the SSH transport. In the HTTP transport,
      chunked transfer is used and the end of an HTTP response body (and
      the end of a Mercurial command response) can be identified by a 0
      length chunk. This is how HTTP chunked transfer works. But in the
      SSH transport, there is no such framing, at least for certain
      responses (notably the response to "getbundle" requests). Clients
      can't simply read until end of stream because the socket is
      persistent and reused for multiple requests. Clients need to know
      when they've encountered the end of a request but there is nothing
      simple for them to key off of to detect this. So what happens is
      the client must decode the payload (as opposed to being dumb and
      forwarding frames/packets). This means the payload itself needs
      to support identifying end of stream. In some cases (bundle2), it
      also means the payload can encode "error" or "interrupt" events
      telling the client to e.g. abort processing. The lack of framing
      on the SSH transport and the transfer of its responsibilities to
      e.g. bundle2 is a massive layering violation and a wart on the
      protocol architecture. It needs to be fixed someday by inventing a
      proper framing protocol.
      
      So about compression.
      
      The client transport abstractions have a "_callcompressable()"
      API. This API is called to invoke a remote command that will
      send a compressible response. The response is essentially a
      "streaming" response (no framing data at the Mercurial layer)
      that is fed into a decompressor.
      
      On the HTTP transport, the decompressor is zlib and only zlib.
      There is currently no mechanism for the client to specify an
      alternate compression format. And, clients don't advertise what
      compression formats they support or ask the server to send a
      specific compression format. Instead, it is assumed that non-error
      responses to "compressible" commands are zlib compressed.
      
      On the SSH transport, there is no compression at the Mercurial
      protocol layer. Instead, compression must be handled by SSH
      itself (e.g. `ssh -C`) or within the payload data (e.g. bundle
      compression).
      
      For the HTTP transport, adding new compression formats is pretty
      straightforward. Once you know what decompressor to use, you can
      stream data into the decompressor until you reach a 0 size HTTP
      chunk, at which point you are at end of stream.
      
      So our wire protocol changes for the HTTP transport are pretty
      straightforward: the client and server advertise what compression
      formats they support and an appropriate compression format is
      chosen. We introduce a new HTTP media type to hold compressed
      payloads. The header of the payload defines the compression format
      being used. Whoever is on the receiving end can sniff the first few
      bytes and route the rest to an appropriate decompressor.
      
      Support for multiple compression formats is advertised on both
      server and client. The server advertises a "compression" capability
      saying which compression formats it supports and in what order they
      are preferred. Clients advertise their support for multiple
      compression formats and media types via the introduced "X-HgProto"
      request header.
      
      Strictly speaking, servers don't need to advertise which compression
      formats they support. But doing so allows clients to fail fast if
      they don't support any of the formats the server does. This is useful
      in situations like sending bundles, where the client may have to
      perform expensive computation before sending data to the server.
      
      Rather than simply advertise a list of supported compression formats,
      we introduce an additional "httpmediatype" server capability
      advertising which media types the server supports. This means servers
      are explicit about what formats they exchange. IMO, this is superior
      to inferring support from other capabilities (like "compression").
      
      By advertising compression support on each request in the "X-HgProto"
      header and media type and direction at the server level, we are able
      to gradually transition existing commands/responses to the new media
      type and possibly compression. Contrast with the old world, where we
      only supported a single media type and the use of compression was
      built-in to the semantics of the command on both client and server.
      In the new world, if "application/mercurial-0.2" is supported,
      compression is supported. It's that simple.
      
      It's worth noting that we explicitly don't use "Accept,"
      "Accept-Encoding," "Content-Encoding," or "Transfer-Encoding" for
      content negotiation and compression. People knowledgeable of the HTTP
      specifications will say that we should use these because that's
      what they are designed to be used for. They have a point and I
      sympathize with the argument. Earlier versions of this commit even
      defined supported media types in the "Accept" header. However, my
      years of experience rolling out services leveraging HTTP have taught
      me to not trust the HTTP layer, especially if you are going outside
      the normal spec (such as using a custom "Content-Encoding" value to
      represent zstd streams). I've seen load balancers, proxies, and other
      network devices do very bad and unexpected things to HTTP messages
      (like insisting zlib compressed content is decoded and then re-encoded
      at a different compression level or even stripping compression
      completely). I've found that the best way to avoid surprises when
      writing protocols on top of HTTP is to use HTTP as a dumb transport as
      much as possible to minimize the chances that an "intelligent" agent
      between endpoints will muck with your data. While the widespread use of
      TLS is mitigating many intermediate network agents interfering with
      HTTP, there are still problems at the edges, with e.g. the origin HTTP
      server needing to convert HTTP to and from WSGI and buggy or
      feature-lacking HTTP client implementations. I've found the best way to
      avoid these problems is to avoid using headers like "Content-Encoding"
      and to bake as much logic as possible into media types and HTTP message
      bodies. The protocol changes in this commit do rely on a custom HTTP
      request header and the "Content-Type" headers. But we used them before,
      so we shouldn't be increasing our exposure to "bad" HTTP agents.
      
      For the SSH transport, we can't easily implement content negotiation
      to determine compression formats because the SSH transport has no
      content negotiation capabilities today. And without a framing protocol,
      we don't know how much data to feed into a decompressor. So in order
      to implement compression support on the SSH transport, we'd need to
      invent a mechanism to represent content types and an outer framing
      protocol to stream data robustly. While I'm fully capable of doing
      that, it is a lot of work and not something that should be undertaken
      lightly. My opinion is that if we're going to change the SSH transport
      protocol, we should take a long hard look at implementing a grand
      unified protocol that attempts to address all the deficiencies with
      the existing protocol. While I want this to happen, that would be
      massive scope bloat standing in the way of zstd support. So, I've
      decided to take the easy solution: the SSH transport will not gain
      support for multiple compression formats. Keep in mind it doesn't
      support *any* compression today. So essentially nothing is changing
      on the SSH front.
      753b9d43