Initial implementation
Included:
- Create a tarball from a given "dataset" directory (such as a Mercurial repository) and upload it with rsync
- Download tarballs over HTTPS, verify their SHA-256 hash, and keep them in a local cache
- Extract tarballs into temporary directories, run a shell script, and report how long it takes (sketched below)
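A minimal Python sketch of how these implemented pieces could fit together; the helper names (`fetch_tarball`, `run_benchmark`), the cache location, and the URL/hash handling are illustrative assumptions, not the tool's actual API:

```python
# Illustrative only: fetch_tarball, run_benchmark and CACHE_DIR are hypothetical names.
import hashlib
import shutil
import subprocess
import tarfile
import tempfile
import time
import urllib.request
from pathlib import Path

CACHE_DIR = Path.home() / ".cache" / "dataset-tarballs"  # assumed cache location

def fetch_tarball(url: str, expected_sha256: str) -> Path:
    """Download a tarball over HTTPS unless a cached copy already matches the hash."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / f"{expected_sha256}.tar.gz"
    if not cached.exists():
        with urllib.request.urlopen(url) as resp, open(cached, "wb") as out:
            shutil.copyfileobj(resp, out)
    digest = hashlib.sha256(cached.read_bytes()).hexdigest()
    if digest != expected_sha256:
        cached.unlink()
        raise ValueError(f"SHA-256 mismatch: expected {expected_sha256}, got {digest}")
    return cached

def run_benchmark(tarball: Path, script: Path) -> float:
    """Extract the dataset into a temporary directory, run a shell script
    against it, and return the elapsed wall-clock time in seconds."""
    with tempfile.TemporaryDirectory() as tmp:
        with tarfile.open(tarball) as tar:
            tar.extractall(tmp)
        start = time.perf_counter()
        subprocess.run(["/bin/sh", str(script.resolve())], cwd=tmp, check=True)
        return time.perf_counter() - start
```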
Planned:
- Compile a configurable changeset of Mercurial to run in benchmarks
- Cache installs of compiled Mercurial
- Choose a Mercurial "variant" (known in Mercurial as a module policy) among py, py+c, py+c+rust, rhg, etc.
- Take benchmarking ideas from hyperfine
- Run the same benchmarks multiple times and measure the average duration and standard deviation (see the sketches after this list)
- Subtract the baseline time needed to spawn /bin/sh and run an empty shell script
- Add a "warm-up" run
- Auto-adaptive number of runs?
- Detect statistical outliers? (one possible approach is sketched after this list)
- Save results to files, together with environment info
- Compare pairs of result sets (report, for each dataset, ±X% faster and ±N milliseconds faster; see the sketches after this list)
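For the run-several-times and baseline-subtraction items, the arithmetic is straightforward; a sketch assuming each run's wall-clock duration is collected in seconds (the function name is made up):

```python
import statistics

def summarize(durations, shell_baseline=0.0):
    """Mean and standard deviation of benchmark durations (seconds), after
    subtracting the baseline cost of spawning /bin/sh on an empty script."""
    adjusted = [d - shell_baseline for d in durations]
    mean = statistics.mean(adjusted)
    stdev = statistics.stdev(adjusted) if len(adjusted) > 1 else 0.0
    return mean, stdev
```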
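For outlier detection, one common heuristic (not necessarily what hyperfine or this tool would end up using) is the modified z-score based on the median absolute deviation:

```python
import statistics

def outliers(durations, threshold=3.5):
    """Flag runs whose modified z-score exceeds the threshold."""
    median = statistics.median(durations)
    mad = statistics.median(abs(d - median) for d in durations)
    if mad == 0:
        return []  # all runs (nearly) identical: nothing to flag
    return [d for d in durations if abs(0.6745 * (d - median) / mad) > threshold]
```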
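And for comparing two result sets on a given dataset, the ±X% / ±N ms report could be computed like this (again, names and output format are assumptions):

```python
def compare(before_seconds: float, after_seconds: float) -> str:
    """Describe how much faster (or slower) 'after' is than 'before',
    as a percentage and as an absolute number of milliseconds."""
    delta_ms = (before_seconds - after_seconds) * 1000.0
    percent = (before_seconds - after_seconds) / before_seconds * 100.0
    word = "faster" if delta_ms >= 0 else "slower"
    return f"{abs(percent):.1f}% {word} ({abs(delta_ms):.1f} ms)"
```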