High-level Mercurial performance testing
Setup with Docker
With Docker, you only need to run the following command to get the results:
docker-compose up --build
The runner will start, benchmark a set of commits, and you will be able to browse the results at http://localhost:8080
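If you prefer to keep the stack running in the background, the usual docker-compose commands apply (nothing specific to this repository):
docker-compose up --build -d    # build and start the runner detached
docker-compose logs -f          # follow the runner output
docker-compose down             # stop and remove the containers when done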
Setup manually
You can follow the Dockerfile content, which boils down to the following steps (a consolidated example follows the list):
- Install the dependencies with:
pip install -r requirements.txt
- Prepare the repositories and the asv-specific files, and launch the workers with:
./launch.py
- Start the web server that displays the results with:
asv preview
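Put together, a minimal manual run looks like this (same commands as the list above, run from the repository root):
pip install -r requirements.txt    # install the Python dependencies
./launch.py                        # prepare the repos, asv files, and run the benchmarks
asv preview                        # serve the results locally for browsing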
For the pull/push benchmarks over ssh, you have to set up an ssh key without a passphrase (or rely on ssh-agent):
mkdir ~/.ssh
ssh-keygen -P '' -f ~/.ssh/bighgperf
cat ~/.ssh/bighgperf.pub >> ~/.ssh/authorized_keys
ssh-keyscan -H localhost >> ~/.ssh/known_hosts
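If you do not want to rely on ssh-agent, one standard OpenSSH option (not specific to this repository) is to tell ssh to use the generated key when connecting to localhost, via ~/.ssh/config:
cat >> ~/.ssh/config << 'EOF'
Host localhost
    IdentityFile ~/.ssh/bighgperf
EOF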
How to set up
Make sure you have the environment set up:
$ virtualenv some/dir/venv-asv
$ some/dir/venv-asv/bin/pip install -r requirements.txt
Do not forget to activate the virtual env before running the benchmarks:
$ source some/dir/venv-asv/bin/activate
Using a non-default set of repositories
Instead of using the repositories defined in default.repos, one can create a local.repos file. The format is the same, and the setup will automatically pick it up if it exists.
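For example, a simple way to start your own list is to copy the default file and edit it (the file names come from the paragraph above):
$ cp default.repos local.repos
Then trim local.repos down to the repositories you care about; the format stays the same as default.repos.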
Downloading repositories from a different location
You can directly call the script/setup-repos file to download the repositories from an alternative location. For example:
$ ./script/setup-repos default.repos repos/ sftp://data.local//srv/datadir/
Running benchmarks
How to run all benchmarks on the default revisions
$ ./launch.py
How to run all benchmarks on specific revisions
$ ./launch.py "0b63a6743010+33ac6a72308a"
(any Mercurial revset is supported)
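Since any revset works, you can for instance benchmark every revision in a range (same hashes as above, expressed with the :: range operator):
$ ./launch.py "0b63a6743010::33ac6a72308a"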
How to run specific benchmarks
Use the '--bench' option to filter the benchmarks to run with a regular expression:
Running one benchmark only
$ ./launch.py "0b63a6743010+33ac6a72308a" --bench track_commit
Running one repository variant only
$ ./launch.py "0b63a6743010+33ac6a72308a" --bench "netbeans-2018"
Running one test with one variant
$ ./launch.py "0b63a6743010+33ac6a72308a" --bench "track_commit.*mozilla-central"
Comparing results
$ asv compare -s oldhash newhash
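If you are unsure which hashes have recorded results, asv can list them; a quick check (a standard asv command, nothing specific to this repository) is:
$ asv show
Without arguments, this prints the commits for which results exist, which you can then feed to asv compare.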
Checking where a run spent time
$ ./asv_result_time.py results//.json