  1. May 17, 2019
  2. May 18, 2019
  3. May 19, 2019
  4. Apr 28, 2019
    • benchmark: add an "identical" discovery benchmark · da84cf0adbf0
      Pierre-Yves David authored
      This one is dedicated to tracking the time taken for two identical
      repositories to realise they are identical.
      
      We take the addition of a new test as an opportunity to give this test a
      proper name.
    • benchmark: migrate discovery benchmarks to roles · 631ac2098634
      Pierre-Yves David authored
      The existing discovery tests now use the new roles information. To keep
      compatibility with the older class structure, we use the `benchmark_name`
      feature to keep the same names (see the sketch below). We'll probably want
      to use that very same feature to rename all tests later into something
      more sensible.
      
      However, as a side effect, this means we are dropping the "same" variant
      of the subset/superset test. So ASV will be confused by the params value
      change... <sigh>.
      
      Important note: for this discovery to work, benchmarks now need to have an
      upgraded reference.
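      A minimal sketch of what such a `benchmark_name` override can look like,
      assuming asv's attribute-based naming; the class, params, and name below
      are illustrative, not the actual suite:

        # Illustrative sketch: keep a stable, historical benchmark name while
        # the class structure moves to role-based parameters.
        class DiscoverySuite:
            # Roles replace the old subset/superset class split; "identical"
            # is the newly added variant.
            params = (['subset', 'superset', 'identical'],)
            param_names = ['role']

            def time_discovery(self, role):
                pass  # run discovery between the two repositories here

            # asv's `benchmark_name` attribute overrides the auto-generated
            # "ClassName.time_method" name, preserving result continuity.
            time_discovery.benchmark_name = 'exchange.discovery.time_discovery'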
  5. Apr 27, 2019
  6. Mar 30, 2019
  7. Mar 29, 2019
  8. Mar 28, 2019
  9. Mar 27, 2019
  10. Mar 28, 2019
  11. Mar 05, 2019
    • Parametrize the list of strip variants per repo. · c21323ce0592
      Raphaël Gomès authored
      Strip variants can differ between repos and should not be hardcoded.
      This change passes the data from each repo's .benchrepo file to the
      corresponding test (see the sketch below).
      
      We are now storing the entire repo prefix instead of only storing the
      hash, since it is just as unique; this removes the need to recompute the
      prefix from the hash later in the code.
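      A minimal sketch of loading such per-repo variants, assuming a
      hypothetical YAML layout for the .benchrepo file (the real format is not
      shown in this log) and an illustrative repos directory:

        # Illustrative sketch only: read strip variants from each repo's
        # .benchrepo file instead of hardcoding them.
        import os

        import yaml  # assuming the .benchrepo file is YAML; this is a guess


        def load_strip_variants(repos_dir, repo_name):
            """Return the strip variants declared in <repo>/.benchrepo."""
            path = os.path.join(repos_dir, repo_name, '.benchrepo')
            with open(path) as f:
                data = yaml.safe_load(f)
            # Each variant keeps the full changeset prefix, so there is no
            # need to recompute the prefix from the hash later on.
            return data.get('strip-variants', [])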
  12. Mar 04, 2019
  13. Mar 05, 2019
  14. Jan 24, 2019
    • track_discovery: benchmark running perfdiscovery · 6444653212d3
      Georges Racinet authored
      This uses the same stripped clone as the benchmarks for
      incoming/outgoing.

      There are two different benchmarks: one for the case where the source is
      a superset of the remote, and one for the case where the source is a
      subset of the remote. A third option, where there are exclusive
      changesets on both sides, will be implemented later (see the sketch
      below).
      
      Side note: perfdiscovery was introduced in Mercurial in public changeset
      db6cace18765.
      
      The pure 'subset' case is interesting because that's what a CI bot does
      all the time. More generally, it is the point of interest for VCS-based
      distribution, such as sets of tools, etc.
      
      (Later on, we shall introduce a 'mixed' case, where both repos have heads
      that their peer doesn't know of.)
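      A minimal sketch of a runner for these benchmarks; perfdiscovery comes
      from Mercurial's perf extension (contrib/perf.py), but the class, paths,
      and output parsing below are assumptions:

        # Illustrative sketch: time `hg perfdiscovery` between the stripped
        # clone and the full repository, once per role.
        import subprocess


        class PerfDiscoverySuite:
            params = (['subset', 'superset'],)
            param_names = ['role']

            def track_discovery(self, role):
                # 'subset': discovery runs from the stripped clone against
                # the full repository; 'superset' swaps the two sides.
                local, remote = 'stripped-clone', 'full-clone'
                if role == 'superset':
                    local, remote = remote, local
                out = subprocess.check_output(
                    ['hg', '--cwd', local,
                     '--config', 'extensions.perf=contrib/perf.py',
                     'perfdiscovery', '../' + remote])
                # perf.py prints lines like "! wall 0.000063 comb ...";
                # report the wall time (output format assumed).
                for line in out.decode().splitlines():
                    if line.startswith('! wall'):
                        return float(line.split()[2])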
  15. Nov 23, 2018
  16. Nov 02, 2018
  17. Sep 06, 2018
  18. Aug 30, 2018
  19. Jun 15, 2018
  20. Jun 12, 2018
    • Add a clone --stream benchmark · 21e8fb819e57
      Philippe Pepiot authored
      In a dedicated class, since the results cannot be correlated with the
      other clone benchmarks. Also, unlike the other clone benchmarks, don't
      benchmark with a given revision, since --stream will ignore it. A sketch
      follows below.
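      A minimal sketch of such a dedicated class, with hypothetical names and a
      local source path; only the --stream flag and the no-revision choice come
      from the message above:

        # Illustrative sketch: a dedicated suite for streaming clones, with
        # setup/teardown around every run to limit disk usage.
        import shutil
        import subprocess
        import tempfile


        class StreamCloneSuite:
            number = 1  # re-run setup/teardown around every single clone

            def setup(self):
                self.workdir = tempfile.mkdtemp()

            def teardown(self):
                shutil.rmtree(self.workdir, ignore_errors=True)

            def time_clone_stream(self):
                # No -r/--rev argument: --stream ignores it anyway.
                subprocess.check_call(
                    ['hg', 'clone', '--stream', 'reference-repo',
                     self.workdir + '/clone'])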
    • Enable clone benchmarks for all reference repositories · 71963760ac6d
      Philippe Pepiot authored
      This is a bit tricky because there are several constraints:

      * we must clean up after each clone (in setup/teardown) to limit disk space usage
      * we must limit benchmark duration (avoid running for too long on big repositories)
      * we must have stable results for small repositories
      
      The whole algorithm is in asv.benchmark.benchmark_timing().

      To ensure setup/teardown is called before/after each clone we *must* have
      number = 1 (so timer.timeit() will be called with 1 as argument).
      It is then important to set a proper sample_time, because it controls how
      long the benchmark will run: timeit() is called `repeat` (default 10)
      times, unless the whole benchmark (= one benchmark function with one set
      of parameters) takes more than `repeat` * 1.3 * `sample_time`, where
      `sample_time` has a default value of 0.1.

      This can cut the sampling very short: for instance, with a clone duration
      of 20s this will run only one clone even if repeat is set to 10...

      So use sample_time = (max_time_we_want_the_benchmark_to_run / (repeat * 1.3))
      to control the maximum whole-benchmark duration and to ensure timeit()
      will be called close to `repeat` times; see the sketch below.
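      A worked example of that sizing rule as an asv class attribute; the class
      name and the 60s budget are illustrative, while the constants come from
      the message above:

        # Illustrative sketch: derive sample_time from a target maximum
        # duration so timeit() runs close to `repeat` times.
        class CloneSuite:
            number = 1   # force setup/teardown around every single clone
            repeat = 10  # asv's default repeat count
            # Stopping rule: sampling ends once the whole benchmark exceeds
            # repeat * 1.3 * sample_time. With the default sample_time of
            # 0.1, a single 20s clone already exceeds 10 * 1.3 * 0.1 = 1.3s,
            # so only one sample would be taken. Budgeting ~60s instead:
            # sample_time = 60 / (10 * 1.3) ~= 4.6s keeps all 10 repeats.
            sample_time = 60 / (10 * 1.3)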
    • Drop useless param 'strip' from clone benchmarks · 38865651f487
      Philippe Pepiot authored
      The 'strip' param only ever took one value, 'same'.
      Drop this param by inheriting from BaseTestSuite instead of
      BaseExchangeTimeSuite.
    • Handle params "repo_type" and "revset" in a Mixin class · 49aa5eec015c
      Philippe Pepiot authored
      In following changesets we will add exchange benchmarks that do not use
      the "strip" or "revset" params.

      Moving this logic outside of BaseExchangeTimeSuite allows re-using it
      without inheriting from that class; see the sketch below.
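      A minimal sketch of the mixin pattern described above; only
      BaseExchangeTimeSuite and the two param names appear in this log, while
      the param values and the other class names are hypothetical:

        # Illustrative sketch: factor the "repo_type" and "revset" params
        # into a mixin so suites can reuse them independently.
        class RepoParamsMixin:
            param_names = ['repo_type', 'revset']
            params = (['reference'], ['tip'])  # hypothetical values

            def setup(self, repo_type, revset):
                # shared per-param setup would live here
                self.repo_type = repo_type
                self.revset = revset


        class BaseExchangeTimeSuite(RepoParamsMixin):
            # Exchange benchmarks inherit the param handling...
            pass


        class SomeOtherSuite(RepoParamsMixin):
            # ...and other suites reuse it without inheriting from
            # BaseExchangeTimeSuite.
            pass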
  21. May 17, 2018
    • Use 'tip' instead of 'default' as revision variant · dbf6c2ca5ed8
      Philippe Pepiot authored
      This applies to the files, archive and exchange benchmarks.

      Some reference repositories don't have a 'default' revision, or it
      relates to very old changesets that are not stripped by prepare_repos.py,
      so exchange benchmarks might be irrelevant.
  22. Aug 14, 2018