- Feb 26, 2019
-
-
Boris Feld authored
This way it will be easier to identify which variants are defined by the repo and which ones are defined by the test itself.
-
- Feb 13, 2019
-
-
Raphael Gomes authored
Some of the performance methods in Mercurial are not defined at all points in the history: there might be times when they were not yet defined, were broken, etc. This decorator adds a way of telling asv that the method should be skipped for the hash being tested.
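As a rough illustration of the idea (the decorator name, the `should_skip` helper, and the use of NaN as asv's "skipped" marker for track benchmarks are assumptions made for this sketch, not necessarily the project's exact code):

```
import functools


def skip_missing_perf(should_skip):
    """Hypothetical decorator: mark a track_* benchmark as skipped when the
    underlying perf method is absent or broken for the benchmarked hash."""
    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            # `should_skip(self)` is assumed to check the hash under test
            # against the revisions where the perf method is usable.
            if should_skip(self):
                return float("nan")  # reported as "skipped" by asv
            return method(self, *args, **kwargs)
        return wrapper
    return decorator
```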
-
- Mar 04, 2019
-
-
Raphaël Gomès authored
We are currently limited in the way that tests can be run against the test data, which has to reside in the same folder. For reasons of flexibility and/or performance, one might need to change the filesystem path to the repositories being tested. This change introduces a new config variable "repodir" which must contain a valid path. While this path is run through `os.path.abspath`, it is recommended to use an absolute path, since the working directory in this context is not guaranteed to be stable.
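For illustration only, here is one way such a "repodir" setting could be read and normalized; the helper name and the fallback default are assumptions, only the "repodir" key and the `os.path.abspath` call come from the description above:

```
import os

import yaml


def resolve_repo_dir(config_path="config.yaml"):
    """Sketch: return an absolute path to the benchmark repositories."""
    with open(config_path) as f:
        config = yaml.safe_load(f) or {}
    # Fall back to a "repos" directory next to the config file (assumed default).
    default = os.path.join(os.path.dirname(os.path.abspath(config_path)), "repos")
    repodir = config.get("repodir", default)
    # An absolute path is recommended because the working directory
    # is not guaranteed to be stable when the benchmarks run.
    return os.path.abspath(repodir)
```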
-
- Feb 12, 2019
-
-
Raphael Gomes authored
-
- Jan 25, 2019
-
-
Boris Feld authored
-
- Nov 23, 2018
-
-
Pierre-Yves David authored
This will give benchmarks access to repository-specific variables.
-
Pierre-Yves David authored
Now that we have yaml, the sky is the limit.
-
- Nov 07, 2018
-
-
Pierre-Yves David authored
That repository might contain obsmarkers. Enabling evolve prevents an annoying warning.
-
Pierre-Yves David authored
Some commands of the `perf` extension return multiple values. Running the command multiple times would be quite slow, so we add a way to reuse data from the same run across multiple `track_xxx` methods.
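A minimal sketch of the caching idea, with made-up class, method, and command names (the real suite runs the benchmarked `hg` through its own wrappers):

```
class CachedPerfRun:
    """Run an expensive perf command once and let several track_* methods
    read different values out of the same result."""

    _cache = {}  # shared across the track_* methods of one run

    def _perf_result(self, command):
        if command not in self._cache:
            self._cache[command] = self._run_perf_command(command)
        return self._cache[command]

    def _run_perf_command(self, command):
        # Placeholder standing in for the actual subprocess call.
        return {"wall": 0.123, "comb": 0.456}

    def track_wall_time(self):
        return self._perf_result("perfexample")["wall"]

    def track_comb_time(self):
        return self._perf_result("perfexample")["comb"]
```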
-
Pierre-Yves David authored
This makes it possible to handle `perf` commands returning multiple values.
-
Pierre-Yves David authored
We split the part responsible for running the actual command from the part responsible for processing it. This will be useful for introducing a new method that relies on JSON output instead of parsing textual output.
-
- Aug 30, 2018
-
-
Pierre-Yves David authored
We use the new options offered by upstream ASV: a minimum of 3 repeats and a maximum of 10. The maximum time (before we give up on further repeats) is 60 seconds per variant.
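In asv this maps onto the `repeat` benchmark attribute, which accepts a `(min_repeat, max_repeat, max_time)` tuple; a minimal sketch with a placeholder benchmark:

```
class ExampleTimeSuite:
    # (min_repeat, max_repeat, max_time_in_seconds) per variant
    repeat = (3, 10, 60.0)

    def time_example(self):
        sum(range(10000))  # placeholder workload
```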
-
- Jul 06, 2018
-
-
Philippe Pepiot authored
-
- Aug 13, 2018
-
-
Martijn Pieters authored
-
Martijn Pieters authored
-
Pierre-Yves David authored
This is bad for our numbers overall; however, the current setup exploits them poorly and creates issues, so we disable them until this gets fixed.
-
Pierre-Yves David authored
This will help to update the code in a later changeset.
-
- Aug 02, 2018
-
-
Pierre-Yves David authored
One more step toward the new world.
-
- Aug 01, 2018
-
-
Martijn Pieters authored
Usage:

```
class TestSuite(BaseTestSuite):
    params = BaseTestSuite.params + [foo_values, bar_values]
    param_names = BaseTestSuite.param_names + ['foo', 'bar']

    @params_as_kwargs
    def test_name(self, foo, bar, **kwargs):
        # ...
```

and you no longer have to worry about other parameters in BaseTestSuite.params.
-
Pierre-Yves David authored
This checks that the overall run process works fine from scratch.
-
Boris Feld authored
Hardcode variants and revset for the moment
-
- May 17, 2018
-
-
Philippe Pepiot authored
So benchmarks/* is flake8 compliant
-
Philippe Pepiot authored
-
Philippe Pepiot authored
This will be used to run perf.py-based track benchmarks on classes inheriting from BaseTestSuite.
-
Philippe Pepiot authored
This will be used for commands that may not exist in earlier Mercurial versions. Move this logic to BaseTestSuite since we will use it when switching perf.py-based benchmarks to inherit from BaseTestSuite in the next changesets.
-
Philippe Pepiot authored
All methods on these base classes are now unused, so simply inherit from BaseTestSuite. _timeit() had been unused for a while, since we switched from asv track benchmarks to time benchmarks.
-
Philippe Pepiot authored
-
Philippe Pepiot authored
This is a wrapper around check_output that runs the currently benchmarked hg (self.hgpath). It also uses the repository in self.repo_path by default, but others can be used (especially useful for exchange benchmarks). The goal is to use this wrapper everywhere instead of the specific wrappers (_execute, _single_execute, subprocess.check_output) in the next changesets.
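A rough sketch of what such a wrapper could look like; apart from `self.hgpath` and `self.repo_path`, the names and the environment handling are assumptions:

```
import os
import subprocess


class HgRunner:
    """Sketch: run the benchmarked hg against a chosen repository."""

    def __init__(self, hgpath, repo_path):
        self.hgpath = hgpath        # hg binary being benchmarked
        self.repo_path = repo_path  # default repository

    def hg_check_output(self, *args, repo_path=None, **kwargs):
        # Default to the suite's repository, but allow overriding it,
        # e.g. for exchange (push/pull) benchmarks.
        cwd = repo_path if repo_path is not None else self.repo_path
        env = dict(os.environ, HGRCPATH="")  # assumed controlled environment
        return subprocess.check_output([self.hgpath] + list(args),
                                       cwd=cwd, env=env, **kwargs)
```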
-
Philippe Pepiot authored
This is used to run a given command within a controlled environment and to handle generic behavior regarding the expected return code. Use it in _single_execute() for the Time and Track base classes.
-
Philippe Pepiot authored
-
Philippe Pepiot authored
It's never used without its default value (True).
-
- Apr 26, 2018
-
-
Philippe Pepiot authored
hgweb is broken on some revisions of Mercurial (see https://bz.mercurial-scm.org/show_bug.cgi?id=5851). Skip the http benchmarks on such revisions to avoid long timeouts and hgweb process leaks. Add a 'skip' key to config.yaml with the known broken revisions for a given feature, compile all revisions into repos/skip.json, and use it in the benchmark code to decide whether the current revision should be skipped. Use the asv way of skipping tests in setup() and raise NotImplementedError in such cases.

    $ asv run --bench time_push --show-stderr --no-pull 4.6rc0
    ================ =========== =================== ============ ============
    --                                                          revset
    ------------------------------------------------ -------------------------
          repo        repo_type         strip            None       default
    ================ =========== =================== ============ ============
     mercurial-2017    local      same                 98.9±1ms     102±1ms
     mercurial-2017    local      last(all(), 10)      127±1ms      139±10ms
     mercurial-2017    local      last(all(), 100)     190±3ms      212±2ms
     mercurial-2017    local      last(all(), 1000)    823±20ms     839±7ms
     mercurial-2017    ssh        same                 325±7ms      333±10ms
     mercurial-2017    ssh        last(all(), 10)      405±30ms     383±9ms
     mercurial-2017    ssh        last(all(), 100)     468±30ms     495±20ms
     mercurial-2017    ssh        last(all(), 1000)    1.27±0.05s   1.26±0.02s
     mercurial-2017    http       same                 n/a          n/a
     mercurial-2017    http       last(all(), 10)      n/a          n/a
     mercurial-2017    http       last(all(), 100)     n/a          n/a
     mercurial-2017    http       last(all(), 1000)    n/a          n/a
    ================ =========== =================== ============ ============
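A sketch of how the compiled skip list could be consumed in setup(); the shape of repos/skip.json, the class, and the revision lookup are assumptions made for illustration:

```
import json


class HttpPushBenchmark:
    """Sketch of the skip mechanism for benchmarks relying on hgweb."""

    def setup(self, *args):
        # repos/skip.json is assumed to map a feature name to the list of
        # revisions known to be broken for it (compiled from config.yaml).
        with open("repos/skip.json") as f:
            skip = json.load(f)
        if self._benchmarked_revision() in skip.get("hgweb", ()):
            # asv's way of skipping a benchmark from setup()
            raise NotImplementedError("hgweb is broken on this revision")

    def _benchmarked_revision(self):
        return "0" * 40  # placeholder; the real suite knows the tested hash

    def time_push_http(self):
        pass  # placeholder benchmark body
```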
-
- Apr 12, 2018
-
-
Philippe Pepiot authored
Since we are using the perf.py from the benchmarked Mercurial, some commands may not be available. When hg fails with exit code 255, use 'hg help' to determine whether the command exists. Skip gracefully when the command does not exist by returning NaN, which is the equivalent of "skipped" for asv (same behavior as raising NotImplementedError in setup()). Output for "asv run --bench perfphases --show-stderr --no-pull f85de28eae32e7d3":

    ====================== =====
             repo
    ---------------------- -----
     pypy-2017              n/a
     netbeans-2017          n/a
     mozilla-central-2017   n/a
     mercurial-2017         n/a
    ====================== =====
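For illustration, the detection could look roughly like this (the helper name and the commented call site are assumptions):

```
import subprocess


def perf_command_exists(hgpath, command):
    """Sketch: after an exit code of 255, use `hg help` to tell an unknown
    command apart from a genuine failure."""
    result = subprocess.run(
        [hgpath, "help", command],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


# Hypothetical use inside a track_* benchmark:
#     if exc.returncode == 255 and not perf_command_exists(self.hgpath, cmd):
#         return float("nan")   # NaN is reported as "skipped" by asv
```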
-
- Apr 23, 2018
-
-
Boris Feld authored
-
- Apr 12, 2018
-
-
Philippe Pepiot authored
When running "asv dev" there is no virtualenv managed by asv, so use the local installation in ./mercurial
-
- Mar 22, 2018
-
-
Philippe Pepiot authored
This works by adding a 'type' param which can be local or ssh. For ssh, use '--remotecmd /path/to/benchmarked/hg' and 'ssh://localhost//path/to/repo'; this assumes the user running asv can ssh to localhost without a password, either with a key without a passphrase or with ssh-agent. To have ssh-agent working we must allow SSH_AUTH_SOCK to pass through to subprocesses.
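As a sketch of the idea (the function and argument names are made up; only the 'type' values, the URL form, and --remotecmd come from the description above):

```
def build_source_args(repo_type, repo_path, hgpath):
    """Sketch: build the pull/push source and extra flags per repo type."""
    if repo_type == "ssh":
        # Assumes passwordless ssh to localhost (key without passphrase or
        # ssh-agent, with SSH_AUTH_SOCK allowed through to subprocesses).
        # With an absolute repo_path this yields ssh://localhost//path/to/repo.
        return ["ssh://localhost/" + repo_path, "--remotecmd", hgpath]
    return [repo_path]
```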
-
- Jan 31, 2018
-
-
Philippe Pepiot authored
In ASV, setup is run for each repeat, but the benchmark function is run `number` times within the same repetition (each of these is called a "sample"). So for push/pull benchmarks, number should be set to 1 (call setup before each call of the benchmark method); also, setup isn't called during the "warmup" time, so disable it. Set repeat to 20 (instead of the default 10); this should be enough to get stable results while keeping the benchmark time reasonable. Also move the benchmark params (number, timer) to class variables.
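A minimal sketch of how these settings map onto asv benchmark attributes (the class, the benchmark bodies, and the choice of wall-clock timer are placeholders/assumptions):

```
import timeit


class ExchangeBenchmarks:
    number = 1                    # run setup before every benchmark call
    warmup_time = 0               # setup is not called during warmup, so disable it
    repeat = 20                   # instead of the default 10
    timer = timeit.default_timer  # assumed wall-clock timer

    def setup(self):
        self.data = list(range(1000))  # placeholder state rebuilt for each repeat

    def time_push(self):
        sorted(self.data)  # placeholder workload
```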
-
- Jan 19, 2018
-
-
Philippe Pepiot authored
expect a return code of 1 for incoming, outgoing and push in this case.
-
- Jan 10, 2018
-
-
Boris Feld authored
-
- Dec 14, 2017
-
-
Philippe Pepiot authored
If the first measurement is > 30, the function returned median([]), which returns 0.0. Include the first measurement in the timings to avoid this case.
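A small sketch of the fix described above (the function is a stand-in for the suite's timing helper, assuming a 30-second budget):

```
from statistics import median


def measure(run_once, max_seconds=30.0):
    """Sketch: always record the first measurement, even when it alone
    already exceeds the time budget, so median() never sees an empty list."""
    timings = []
    elapsed = 0.0
    while True:
        duration = run_once()
        timings.append(duration)   # the first measurement is always kept
        elapsed += duration
        if elapsed > max_seconds:
            break
    return median(timings)
```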
-