  3. Nov 14, 2024
    • contrib: add a bat file to build all of the wheels on Windows · ea9cbb0fa3d3
      Matt Harbison authored
      This is duplicated from the current CI config, to be able to build releases
      consistently outside of CI.  I don't like the duplication, but I'm not worried
      about things changing too often, so I'm not bothering with PowerShell or some
      form that would allow execution by the CI runner.  We should consider putting
      the config in `pyproject.toml`, where things like which Python versions to
      support can be centrally controlled for all platforms.  The output directory is
      different from CI here, but that's fine because this script is intended to run
      on a system that is *not* hosting the CI setup, and `dist/` is more standard.
      I dropped the `win32` part of the output directory name because that implies
      the 32-bit Intel architecture.
      
      Apparently, arm64 builds are supported back to Python 3.9, but support is still
      experimental (with py3.13)[1].  The CI system starts arm64 support with Python
      3.11, because that's the first version for which an arm64 Python installer was
      available on Windows.  This doesn't second-guess that decision.
      
      The required `msgfmt.exe` was installed manually[2], as it isn't currently
      handled by the dependency installation script.  Otherwise, this was successfully
      used with an activated venv based on Python 3.12.5 with only
      `cibuildwheel==2.21.3` installed.
      
      [1] https://cibuildwheel.pypa.io/en/stable/#what-does-it-do
      [2] https://github.com/mlocati/gettext-iconv-windows/releases/download/v0.22.5a-v1.17-r3/gettext0.22.5a-iconv1.17-shared-64.exe
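      
      For reference, a minimal sketch of driving cibuildwheel for both Windows
      architectures — in Python rather than batch, and not the actual contents
      of the bat file; the architecture list and output directory here are
      assumptions:
      
          # Hypothetical sketch of what the bat file automates.
          import subprocess
          import sys
          
          # Build wheels for both Windows architectures.  Interpreter
          # selection comes from cibuildwheel's own configuration
          # (environment variables or pyproject.toml).
          for arch in ("AMD64", "ARM64"):
              subprocess.run(
                  [sys.executable, "-m", "cibuildwheel",
                   "--platform", "windows",
                   "--archs", arch,
                   "--output-dir", "dist"],
                  check=True,
              )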
  13. Oct 01, 2020
    • rust: move rustfmt.toml to repo root so it can be used by `hg fix` · 426294d06ddc
      Martin von Zweigbergk authored
      `hg fix` runs the formatters from the repo root, so it doesn't pick up
      the `rustfmt.toml` configs we had in each of the `hg-core`, `hg-cpython`,
      and `rhg` packages, which resulted in warnings about `async fn` not
      existing in Rust 2015. This patch moves the `rustfmt.toml` file to the
      root so `hg fix` will use it.
      
      By putting the `rustfmt.toml` file in a higher-level directory, it
      also applies to the `chg` and `hgcli` packages. That makes
      `test-check-rust-format.t` fail, so this patch also applies the new
      formatting rules to those packages.
      
      Differential Revision: https://phab.mercurial-scm.org/D9142
  14. Apr 24, 2020
    • packaging: support building Inno installer with PyOxidizer · 94f4f2ec7dee
      Gregory Szorc authored
      We want to start distributing Mercurial on Python 3 on
      Windows. PyOxidizer will be our vehicle for achieving that.
      
      This commit implements basic support for producing Inno
      installers using PyOxidizer.
      
      While it is an eventual goal of PyOxidizer to produce
      installers, those features aren't yet implemented. So our
      strategy for producing Mercurial installers is similar to
      what we've been doing with py2exe: invoke a build system to
      produce files then stage those files into a directory so they
      can be turned into an installer.
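      
      As a minimal sketch of that build-then-stage flow (hypothetical paths and
      function names, not the actual hgpackaging code):
      
          # Hypothetical sketch: build with PyOxidizer, then stage the
          # produced files for the installer tooling.
          import pathlib
          import shutil
          import subprocess
          
          def build_and_stage(source_dir: pathlib.Path,
                              staging_dir: pathlib.Path) -> None:
              # Build the Windows target defined in pyoxidizer.bzl.
              subprocess.run(["pyoxidizer", "build"], cwd=source_dir,
                             check=True)
              # Copy the built application into a staging directory that the
              # Inno Setup compiler consumes later ("build/apps" is an
              # assumed output location).
              built = source_dir / "build" / "apps"
              shutil.copytree(built, staging_dir, dirs_exist_ok=True)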
      
      We had to make significant alterations to the pyoxidizer.bzl
      config file to get it to produce the files that we desire for
      a Windows install. This meant differentiating the build targets
      so we can target Windows specifically.
      
      We've added a new module to hgpackaging to deal with interacting
      with PyOxidizer. It is similar to py2exe: we invoke a build process
      then copy files to a staging directory. Ideally these extra
      files would be defined in pyoxidizer.bzl. But I don't think it
      is worth doing at this time, as PyOxidizer's config files are
      lacking some features to make this turnkey.
      
      The rest of the change introduces a variant of the
      Inno installer code that invokes PyOxidizer instead of
      py2exe.
      
      Comparing the Python 2.7 based Inno installers with this
      one, the following changes were observed:
      
      * No lib/*.{pyd, dll} files
      * No Microsoft.VC90.CRT.manifest
      * No msvc{m,p,r}90.dll files
      * python27.dll replaced with python37.dll
      * Add vcruntime140.dll file
      
      The disappearance of the .pyd and .dll files is acceptable, as
      PyOxidizer has embedded these in hg.exe and loads them from
      memory.
      
      The disappearance of the *90* files is acceptable because those
      provide the Visual C++ 9 runtime, as required by Python 2.7.
      Similarly, the appearance of vcruntime140.dll is a requirement
      of Python 3.7.
      
      Differential Revision: https://phab.mercurial-scm.org/D8473
  15. Dec 06, 2019
    • fuzz: use a more standard approach to allow local builds of fuzzers · 5a9e2ae9899b
      Augie Fackler authored
      This is taken from the (improved since we started fuzzing) guide on ideal
      integrations. Rather than have our own wonky targets for building outside the
      fuzzer universe, we have a driver program we carry along and use when we're
      not using LibFuzzer. This will let us jettison a fair amount of goo.
      
      contrib/fuzz/standalone_fuzz_target_runner.cc is
      https://github.com/google/oss-fuzz/ file
      projects/example/my-api-repo/standalone from git revision
      c4579d9358a73ea5dbcc99cb985de1f2bf76dcf7, reformatted with our
      clang-format settings and a no-check-code comment added. It allows
      running a single test input through a fuzzer, rather than performing
      ongoing fuzzing as libfuzzer would.
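      
      Conceptually, the standalone runner just replays each input file once
      through the fuzz entry point. A hedged Python rendering of the idea (the
      target function is a stand-in, not the actual C++ code):
      
          # Sketch: run each corpus file through a fuzz target exactly once,
          # mirroring what standalone_fuzz_target_runner.cc does for
          # LLVMFuzzerTestOneInput.
          import sys
          
          def test_one_input(data: bytes) -> None:
              # Stand-in for the real fuzz target under test.
              pass
          
          for path in sys.argv[1:]:
              with open(path, "rb") as f:
                  test_one_input(f.read())
              print("Executed", path)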
      
      contrib/fuzz/FuzzedDataProvider.h is
      https://github.com/llvm/llvm-project/ file
      /compiler-rt/include/fuzzer/FuzzedDataProvider.h from git revision
      a44ef027ebca1598892ea9b104d6189aeb3bc2f0, reformatted with our
      clang-format settings and a no-check-code comment added. We can
      discard this if we instead want to add an hghave check for a new
      enough llvm that includes FuzzedDataProvider.h in the fuzzer headers.
      
      Differential Revision: https://phab.mercurial-scm.org/D7564
  17. Oct 24, 2019
    • packaging: consolidate CLI functionality into packaging.py · 081a77df7bc6
      Gregory Szorc authored
      Consolidating functionality for invoking code in the hgpackaging
      package through a single CLI entry point will make things simpler
      when we add more complexity to that package. For example, it will
      allow us to run things out of a virtualenv with third party
      packages.
      
      This commit consolidates functionality from the Inno and WiX
      build.py scripts into a new packaging.py script. That script
      simply creates a virtualenv and runs the CLI functionality in
      it.
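      
      As a rough sketch of that bootstrap pattern (module and path names are
      assumptions, not the actual packaging.py code):
      
          # Hypothetical sketch: create a venv, install requirements into
          # it, then run the real CLI with the venv's interpreter.
          import pathlib
          import subprocess
          import sys
          import venv
          
          def bootstrap(venv_dir: pathlib.Path,
                        requirements: pathlib.Path) -> None:
              venv.create(venv_dir, with_pip=True)
              python = venv_dir / (
                  "Scripts/python.exe" if sys.platform == "win32"
                  else "bin/python")
              subprocess.run([str(python), "-m", "pip", "install",
                              "-r", str(requirements)], check=True)
              subprocess.run([str(python), "-m", "hgpackaging.cli"],
                             check=True)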
      
      The new virtualenv is populated with jinja2 because I felt it was easier
      to incorporate requirements file processing in this commit, and we will
      use jinja2 in an upcoming commit.
      
      The unified CLI functionality will also make it easier to
      script other packaging workflows going forward, e.g. RPM, Debian,
      and macOS packaging.
      
      Differential Revision: https://phab.mercurial-scm.org/D7156
  22. Sep 06, 2019
    • automation: implement "publish-windows-artifacts" command · 92593d72e10b
      Gregory Szorc authored
      The new command and associated functionality can be used to
      automate the publishing of Windows release artifacts. It
      supports uploading wheels to PyPI (using twine) and copying
      the artifacts to mercurial-scm.org and updating the latest.dat
      file to advertise them via the website.
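      
      A minimal sketch of those two publishing steps (the artifact glob and
      destination path are placeholders, not the real values):
      
          # Hypothetical sketch of publishing Windows release artifacts.
          import glob
          import subprocess
          
          wheels = glob.glob("dist/*.whl")
          # Upload the wheels to PyPI with twine.
          subprocess.run(["twine", "upload", *wheels], check=True)
          # Copy the artifacts to the distribution host.
          subprocess.run(
              ["scp", *wheels,
               "mercurial-scm.org:/path/to/windows/artifacts/"],
              check=True,
          )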
      
      I ran `automation.py publish-windows-artifacts 5.1.1` and it
      appeared to "just work." But the real test will be to do this
      on the next release...
      
      Differential Revision: https://phab.mercurial-scm.org/D6786
  23. Apr 27, 2019
    • automation: initial support for running Linux tests · 65b3ef162b39
      Gregory Szorc authored
      Building on top of our Windows automation support, this commit
      implements support for performing automated tasks on remote Linux
      machines. Specifically, we implement support for running tests
      on ephemeral EC2 instances. This seems to be a worthwhile place
      to start, as building packages on Linux is more or less a solved
      problem because we already have facilities for building in Docker
      containers, which provide "good enough" reproducibility guarantees.
      
      The new `run-tests-linux` command works similarly to
      `run-tests-windows`: it ensures an AMI with hg dependencies is
      available, provisions a temporary EC2 instance with this AMI, pushes
      local changes to that instance via SSH, then invokes `run-tests.py`.
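      
      A rough sketch of the provisioning step with boto3 (the AMI ID and key
      pair name are placeholders, not the real values):
      
          # Hypothetical sketch: launch an ephemeral EC2 instance to run
          # tests on, then wait for it to come up.
          import boto3
          
          ec2 = boto3.resource("ec2")
          instance = ec2.create_instances(
              ImageId="ami-0123456789abcdef0",  # placeholder AMI with hg deps
              InstanceType="c5.9xlarge",
              KeyName="hg-automation",          # placeholder key pair
              MinCount=1,
              MaxCount=1,
          )[0]
          instance.wait_until_running()
          instance.reload()
          # Local changes are then pushed over SSH and run-tests.py invoked.
          print("instance up at", instance.public_ip_address)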
      
      Using this new command, I am able to run the entire test harness
      substantially faster than on my local machine, courtesy of
      access to EC2 instances with massive core counts:
      
      wall: 16:20 ./run-tests.py -l (i7-6700K)
      wall: 14:00 automation.py run-tests-linux --ec2-instance c5.2xlarge
      wall:  8:30 automation.py run-tests-linux --ec2-instance m5.4xlarge
      wall:  8:04 automation.py run-tests-linux --ec2-instance c5.4xlarge
      wall:  4:30 automation.py run-tests-linux --ec2-instance c5.9xlarge
      wall:  3:57 automation.py run-tests-linux --ec2-instance m5.12xlarge
      wall:  3:05 automation.py run-tests-linux --ec2-instance m5.24xlarge
      wall:  3:02 automation.py run-tests-linux --ec2-instance c5.18xlarge
      
      ~3 minute wall time to run pretty much the entire test harness is
      not too bad!
      
      The AMIs install multiple versions of Python, and the run-tests-linux
      command specifies which one to use:
      
      automation.py run-tests-linux --python system3
      automation.py run-tests-linux --python 3.5
      automation.py run-tests-linux --python pypy2.7
      
      By default, the system Python 2.7 is used. Using this functionality,
      I was able to identify some unexpected test failures on PyPy!
      
      Included in the feature is support for running with alternate
      filesystems. You can simply pass --filesystem to the command to
      specify the type of filesystem to run tests on. When the ephemeral
      instance is started, a new filesystem will be created and tests
      will run from it:
      
      wall:  4:30 automation.py run-tests-linux --ec2-instance c5.9xlarge
      wall:  4:20 automation.py run-tests-linux --ec2-instance c5d.9xlarge --filesystem xfs
      wall:  4:24 automation.py run-tests-linux --ec2-instance c5d.9xlarge --filesystem tmpfs
      wall:  4:26 automation.py run-tests-linux --ec2-instance c5d.9xlarge --filesystem ext4
      
      We also support multiple Linux distributions:
      
      $ automation.py run-tests-linux --distro debian9
      total time: 298.1s; setup: 60.7s; tests: 237.5s; setup overhead: 20.4%
      
      $ automation.py run-tests-linux --distro ubuntu18.04
      total time: 286.1s; setup: 61.3s; tests: 224.7s; setup overhead: 21.4%
      
      $ automation.py run-tests-linux --distro ubuntu18.10
      total time: 278.5s; setup: 58.2s; tests: 220.3s; setup overhead: 20.9%
      
      $ automation.py run-tests-linux --distro ubuntu19.04
      total time: 265.8s; setup: 42.5s; tests: 223.3s; setup overhead: 16.0%
      
      Debian and Ubuntu are supported because those are what I use and am
      most familiar with. It should be easy enough to add support for other
      distros.
      
      Unlike the Windows AMIs, Linux EC2 instances bill per second. So
      the cost of instantiating an ephemeral instance isn't as severe.
      That being said, there is some overhead, as it takes several dozen
      seconds for the instance to boot, push local changes, and build
      Mercurial. During this time, the instance is largely CPU idle and
      wasting money. Even with this inefficiency, running tests is
      relatively cheap: $0.15-$0.25 per full test run. A machine running
      tests as efficiently as these EC2 instances would cost, say, $6,000, so
      you can run the test harness >20,000 times for the cost of an
      equivalent machine. Running tests in EC2 is almost certainly cheaper
      than buying a beefy machine for developers to use :)
      
      # no-check-commit because foo_bar function names
      
      Differential Revision: https://phab.mercurial-scm.org/D6319
  24. Mar 15, 2019
    • automation: perform tasks on remote machines · b05a3e28cf24
      Gregory Szorc authored
      Sometimes you don't have access to a machine in order to
      do something. For example, you may not have access to a Windows
      machine required to build Windows binaries or run tests on that
      platform.
      
      This commit introduces a pile of code intended to help
      "automate" common tasks, like building release artifacts.
      
      In its current form, the automation code provides functionality
      for performing tasks on Windows EC2 instances.
      
      The hgautomation.aws module provides functionality for integrating
      with AWS. It manages EC2 resources such as IAM roles, EC2
      security groups, AMIs, and instances.
      
      The hgautomation.windows module provides a higher-level
      interface for performing tasks on remote Windows machines.
      
      The hgautomation.cli module provides a command-line interface to
      these higher-level primitives.
      
      I attempted to structure Windows remote machine interaction
      around Windows Remoting / PowerShell. This is kinda/sorta like
      SSH + shell, but for Windows. In theory, most of the functionality
      is cloud provider agnostic, as we should be able to use any
      established WinRM connection to interact with a remote. In
      reality, we're tightly coupled to AWS at the moment because
      I didn't want to prematurely add abstractions for a 2nd cloud
      provider. (1 was hard enough to implement.)
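      
      A small sketch of what that WinRM interaction looks like with the
      pywinrm package (host and credentials are placeholders):
      
          # Hypothetical sketch: open a WinRM session to a remote Windows
          # machine and run a PowerShell command on it.
          import winrm
          
          session = winrm.Session(
              "https://remote-windows-host:5986/wsman",
              auth=("Administrator", "example-password"),
          )
          result = session.run_ps("hg version")
          print(result.status_code)
          print(result.std_out.decode())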
      
      In the aws module is code for creating an image with a fully
      functional Mercurial development environment. It contains VC9,
      VC2017, msys, and other dependencies. The image is fully capable
      of building all the existing Mercurial release artifacts and
      running tests.
      
      There are a few things that don't work. For example, running
      Windows tests with Python 3. But building the Windows release
      artifacts does work. And that was an impetus for this work.
      (Although we don't yet support code signing.)
      
      Getting this functionality to work was extremely time consuming.
      It took hours of debugging permissions failures and other wonky
      behavior due to PowerShell Remoting. (The permissions model for
      PowerShell is crazy and you brush up against all kinds of
      issues because of the privileges of the user running
      PowerShell and the permissions of the PowerShell session
      itself.)
      
      The functionality around AWS resource management could use some
      improving. In theory we support shared tenancy via resource
      name prefixing. In reality, we don't offer a way to configure
      this.
      
      Speaking of AWS resource management, I thought about using a tool
      like Terraform to manage resources. But at our scale, writing a
      few dozen lines of code to manage resources seemed acceptable.
      Maybe we should reconsider this if things grow out of control.
      Time will tell.
      
      Currently, emphasis is placed on Windows. But I only started
      there because it was likely to be the most difficult to implement.
      It should be relatively trivial to automate tasks on remote Linux
      machines. In fact, I have a ~1 year old script to run tests on a
      remote EC2 instance. I will likely be porting that to this new
      "framework" in the near future.
      
      # no-check-commit because foo_bar functions
      
      Differential Revision: https://phab.mercurial-scm.org/D6142
  25. Mar 08, 2019
    • wix: functionality to automate building WiX installers · 4371f543efda
      Gregory Szorc authored
      Like we did for Inno Setup, we want to make it easier to
      produce WiX installers. This commit does that.
      
      We introduce a new hgpackaging.wix module for performing
      all the high-level tasks required to produce WiX installers.
      This required miscellaneous enhancements to existing code in
      hgpackaging, including support for signing binaries.
      
      A new build.py script for calling into the module APIs has been
      created. It behaves very similarly to the Inno Setup build.py
      script.
      
      Unlike Inno Setup, we didn't have code in the repo previously
      to generate WiX installers. It appears that all existing
      automation for building WiX installers lives in the
      https://bitbucket.org/tortoisehg/thg-winbuild repository - most
      notably in its setup.py file. My strategy for inventing the
      code in this commit was to step through the code in that repo's
      setup.py and observe what it was doing. Despite the length of
      setup.py in that repository, the actual number of steps required
      to produce a WiX installer is quite small. It consists
      of a basic py2exe build plus invocations of candle.exe and
      light.exe to produce the MSI.
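      
      A minimal sketch of that candle/light step (file names are
      placeholders, not the actual hgpackaging invocation):
      
          # Hypothetical sketch: compile WiX sources and link the MSI.
          import subprocess
          
          # candle.exe compiles the WiX authoring into an object file.
          subprocess.run(
              ["candle.exe", "-out", "mercurial.wixobj", "mercurial.wxs"],
              check=True,
          )
          # light.exe links the object file into the final MSI installer.
          subprocess.run(
              ["light.exe", "-out", "mercurial.msi", "mercurial.wixobj"],
              check=True,
          )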
      
      One rabbit hole that gave me fits was locating the Visual Studio
      9 C Runtime merge modules. These merge modules are only present
      on your system if you have a full Visual Studio 2008 installation.
      Fortunately, I have a copy of Visual Studio 2008 and was able
      to install all the required updates. I then uploaded these merge
      modules to a personal repository on GitHub. That is where the
      added code references them from. We probably don't need to
      ship the merge modules. But that is for another day.
      
      The installs from the MSIs produced with the new automation
      differ from the last official MSI in the following ways:
      
      * Our HTML manual pages have UNIX line endings instead of Windows.
      * We ship modules in the mercurial.pure package. It appears the
        upstream packaging code is not including this package due to
        omission (they supply an explicit list of packages that has
        drifted out of sync with our setup.py).
      * We do not ship various distutils.* modules. This is because
        virtualenvs have a custom distutils/__init__.py that automagically
        imports distutils from its original location and py2exe gets
        confused by this. We don't use distutils in core Mercurial and
        don't provide a usable python.exe, so this omission should be
        acceptable.
      * The version of the enum package is different and we ship
        an enum.pyc instead of an enum/__init__.py.
      * The version of the docutils package is different and we
        ship a different set of files.
      * The version of Sphinx is drastically newer and we ship a
        number of files the old version did not. (I'm not sure why
        we ship Sphinx - I think it is a side-effect of the way the
        THG code was installing dependencies.)
      * We ship the idna package (a dependency of requests, which is a
        dependency of newer versions of Sphinx).
      * The version of imagesize is different and we ship an
        imagesize.pyc instead of an imagesize/__init__.pyc.
      * The version of the jinja2 package is different and the set
        of files differs.
      * We ship the packaging package, which is a dependency for Sphinx.
      * The version of the pygments package is different and the set
        of files differs.
      * We ship the requests package, which is a dependency for Sphinx.
      * We ship the snowballstemmer package, which is a dependency for
        Sphinx.
      * We ship the urllib3 package, which is a dependency for requests,
        which is a dependency for Sphinx.
      * We ship a newer version of the futures package, which includes a
        handful of extra modules that match Python 3 module names.
      
      # no-check-commit because foo_bar naming
      
      Differential Revision: https://phab.mercurial-scm.org/D6097
  26. Mar 07, 2019
    • packaging: extract py2exe functionality to own module · a2e191a937a9
      Gregory Szorc authored
      py2exe builds are shared between Inno Setup and WIX. We'll
      want the logic for performing py2exe builds to be reusable
      across the code for both installers.
      
      This commit extracts the py2exe-specific functionality into
      its own module.
      
      There's definitely room to customize things further. This will
      be done in future commits, as necessary. (I'm not even sure what
      customizations WIX will require yet. Presumably a lot.)
      
      Differential Revision: https://phab.mercurial-scm.org/D6091
    • packaging: move Inno Setup core logic into a module · dc7827a9ba64
      Gregory Szorc authored
      Aspects of building the Inno Setup and WIX installers are shared.
      It will make sense for them to share code.
      
      Plus, having code in a reusable library (as opposed to a standalone
      script) is just a better approach.
      
      This commit moves the core logic to build the Inno Setup installer
      into the hgpackaging package. inno/build.py is now a simple frontend
      script that calls into a module to do the bulk of the work.
      
      As part of this change, I also found a typo in build() where it was
      referencing "iscc" instead of "iscc_exe." Because "iscc" was in
      the global scope via the only caller, things just happened to work
      before. This is another benefit of always using functions and not
      putting global code for __main__ in the same file as library code.
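      
      A tiny illustration of that failure mode (hypothetical names, not the
      actual build.py code):
      
          # The typo inside the function is masked because the caller's
          # module-level variable is visible as a global.
          def build(iscc_exe):
              # BUG: should use iscc_exe; instead this resolves to the
              # global "iscc" below rather than raising NameError.
              print("running", iscc)
          
          iscc = r"C:\Program Files\Inno Setup 5\ISCC.exe"
          build(iscc)  # happens to work; breaks for any other caller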
      
      Differential Revision: https://phab.mercurial-scm.org/D6087
    • packaging: split downloading code into own module · c2237fe1359e
      Gregory Szorc authored
      As we will introduce more code to support packaging, it will be
      useful to have download code in its own module.
      
      Differential Revision: https://phab.mercurial-scm.org/D6084
    • packaging: establish hgpackaging package · 9da97f49d4f4
      Gregory Szorc authored
      Previously, contrib/packaging behaved as the root of a
      package directory, and we had a "packagingutil" module. As I
      work more on packaging code, we'll want to have more code
      shared between different packaging tools. I think it makes
      sense to have a single package containing multiple modules
      rather than multiple top-level modules.
      
      This commit establishes an "hgpackaging" package by moving
      the existing packagingutil code to it.
      
      Differential Revision: https://phab.mercurial-scm.org/D6083
  27. Mar 04, 2019
    • inno: script to automate building Inno installer · d7dc4ac1ff84
      Gregory Szorc authored
      The official Inno installer build process is poorly documented.
      And attempting to reproduce behavior of the installer uploaded
      to www.mercurial-scm.org has revealed a number of unexpected
      behaviors.
      
      This commit attempts to improve the state of reproducibility
      of the Inno installer by introducing a Python script to
      largely automate the building of the installer.
      
      The new script (which must be run from a shell with the
      Visual C++ environment configured) takes care of producing an
      Inno installer. When run from a fresh Mercurial source checkout
      with all the proper system dependencies (the VC++ toolchain,
      Windows 10 SDK, and Inno tools) installed, it "just works."
      The script takes care of downloading all the Python
      dependencies in a secure manner and manages the build
      environment for you. You don't need any additional config
      files: just launch the script, pointing it at an existing
      Python and ISCC binary and it takes care of the rest.
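      
      A minimal sketch of that style of secure, hash-pinned download (the
      helper name is an assumption, not the actual script code):
      
          # Hypothetical sketch: download a dependency and verify a pinned
          # SHA-256 digest before trusting it.
          import hashlib
          import urllib.request
          
          def secure_download(url: str, sha256: str, dest: str) -> None:
              data = urllib.request.urlopen(url).read()
              digest = hashlib.sha256(data).hexdigest()
              if digest != sha256:
                  raise ValueError("hash mismatch for %s: %s" % (url, digest))
              with open(dest, "wb") as f:
                  f.write(data)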
      
      The produced installer creates a Mercurial installation with
      a handful of differences from the existing 4.9 installers
      (produced by someone else):
      
      * add_path.exe is missing (this was removed a few changesets ago)
      * The set of api-ms-win-core-* DLLs is different (I suspect this
        is due to me using a different UCRT / Windows version).
      * kernelbase.dll and msasn1.dll are missing.
      * There are a different set of .pyc files for dulwich,
        keyring, and pygments due to us using the latest versions of
        each.
      * We include Tcl/Tk DLLs and .pyc files (I'm not sure why these
        are missing from the existing installers).
      * We include the urllib3 and win32ctypes packages (which are
        dependencies of dulwich and pywin32, respectively). I'm not
        sure why these aren't present in the existing installers.
      * We include a different set of files for the distutils package.
        I'm not sure why. But it should be harmless.
      * We include the docutils package (it is getting picked up as
        a dependency somehow). I think this is fine.
      * We include a copy of argparse.pyc. I'm not sure why this was
        missing from existing installers.
      * We don't have a copy of sqlite3/dump.pyc. I'm not sure why. The
        SQLite C extension code only imports this module when
        conn.iterdump() is called. It should be safe to omit.
      * We include files in the email.test and test packages. The set of
        files is small and their presence should be harmless.
      
      The new script and support code are written in Python 3 because
      they are brand new, independent code, and I don't believe new
      Python projects should be using Python 2 in 2019 if they have
      a choice about it.
      
      The readme.txt file has been renamed to readme.rst and overhauled
      to reflect the existence of build.py.
      
      Differential Revision: https://phab.mercurial-scm.org/D6066
  32. Feb 26, 2018
    • http: drop custom http client logic · 23d12524a202
      Augie Fackler authored
      Eight and a half years ago, as my starter bug on code.google.com, I
      investigated a mysterious "broken pipe" error from seemingly random
      clients[0]. That investigation revealed a tragic story: the Python
      standard library's httplib was (and remains) barely functional. During
      large POSTs, if a server responds early with an error (even a
      permission denied error!) the client only notices that the server
      closed the connection and everything breaks. Such server behavior is
      implicitly legal under RFC 2616 (the latest HTTP RFC as of when I was
      last working on this), and my understanding is that later RFCs have
      made it explicitly legal to respond early with any status code outside
      the 2xx range.
      
      I embarked, probably foolishly, on a journey to write a new http
      library with better overall behavior. The http library appears to work
      well in most cases, but it can get confused in the presence of
      proxies, and it depends on select(2) which limits its utility if a lot
      of file descriptors are open. I haven't touched the http library in
      almost two years, and in the interim the Python community has
      discovered a better way[1] of writing network code. In theory some day
      urllib3 will have its own home-grown http library built on h11[2], or
      we could do that. Either way, it's time to declare our current
      confusingly-named "http2" client logic dead and move on. I do hope to
      revisit this some day: it's still garbage that we can't even respond
      with a 401 or 403 without reading the entire POST body from the
      client, but the goalposts on writing a new http client library have
      moved substantially. We're almost certainly better off just switching
      to requests and eventually picking up their http fixes than trying to
      live with something that realistically only we'll ever use. Another
      approach would be to write an adapter so that Mercurial can use pycurl
      if it's installed. Neither of those approaches seem like they should
      be investigated prior to a release of Mercurial that works on Python
      3: that's where the mindshare is going to be for any improvements to
      the state of the http client art.
      
      0: http://web.archive.org/web/20130501031801/http://code.google.com/p/support/issues/detail?id=2716
      1: http://sans-io.readthedocs.io/
      2: https://github.com/njsmith/h11
      
      Differential Revision: https://phab.mercurial-scm.org/D2444