heptapod / heptapod · Issue #380 · Closed
Created Dec 05, 2020 by Georges Racinet (@gracinet), Owner

Sharding strategy for RSpec tests

Even though we run only a small fraction of all existing RSpec tests (from upstream, plus our specific ones), it takes so much time that it has become an impediment: the rspec CI job currently takes between 35 and 50 minutes.

This is getting worse, as we keep adding whole spec files to the pool of those we run (which is, in itself, a good thing).

To solve a similar problem at a much larger scale, upstream GitLab uses a sharding strategy built on a dedicated tool called Knapsack, which maintains a knapsack_report.json file mapping each spec file to its reference run time.
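
For reference, the report file is, as far as I understand, just a flat JSON object mapping spec file paths to their measured durations in seconds, along these lines (values taken from the timings listed further down):

  {
    "spec/models/user_spec.rb": 85.66,
    "spec/models/repository_spec.rb": 89.24,
    "spec/lib/gitlab/hg_access_spec.rb": 791.16
  }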

The full GitLab setup stores the report file in S3 and keeps it up to date via nightly builds. We will eventually have to do something similar, but we can start the way the Knapsack documentation suggests: by committing the report file into the project.
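
Concretely, if I read the Knapsack documentation correctly, recording timings and running a shard would look roughly like this (the shard count is just an example, and the exact invocation may differ in our setup):

  # full run with timing collection; Knapsack writes its JSON report
  # (named knapsack_rspec_report.json by default, if I recall correctly)
  KNAPSACK_GENERATE_REPORT=true bundle exec rspec spec

  # run shard 0 of 4, spec files being distributed according to the committed report
  CI_NODE_TOTAL=4 CI_NODE_INDEX=0 bundle exec rake knapsack:rspec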

One key aspect is that those RSpec tests can't run in parallel at all within a single setup, hence Knapsack will also be needed on personal setups for the cases where we want to run all tests (e.g. during upstream GitLab version bumps). It even looks as if we'll have to run the shards from a row of HDK workspaces, because sockets and the tmp dir are pretty much hardcoded everywhere.

Current timings

All figures were taken on my development workstation (Ryzen 5 1600, 6 cores + HT):

  • scripts/heptapod_rspec.sh: 28 minutes
  • same tests with 4 shards: 15 minutes

An interesting point is that the sharding works on a per-file basis. Here are the 10 heaviest spec files (taken from the report file):

 (40.40625882148743, 'spec/models/ci/job_artifact_spec.rb'),
 (56.95754432678223, 'spec/lib/gitlab/mercurial/hg_git_repository_spec.rb'),
 (67.25551176071167, 'spec/models/ci/pipeline_spec.rb'),
 (72.62744569778442, 'spec/lib/gitlab/git_access_spec.rb'),
 (73.08498215675354, 'spec/models/merge_request_diff_spec.rb'),
 (78.32610130310059, 'spec/lib/gitlab/mercurial/hgitaly_repository_spec.rb'),
 (85.08860659599304, 'spec/services/projects/create_service_spec.rb'),
 (85.65699505805969, 'spec/models/user_spec.rb'),
 (89.23750615119934, 'spec/models/repository_spec.rb'),
 (791.1557111740112, 'spec/lib/gitlab/hg_access_spec.rb')
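
Incidentally, that kind of listing can be extracted directly from the report file, for instance with jq (assuming the report sits at knapsack_report.json; the path is just an example):

  jq -r 'to_entries | sort_by(.value) | .[-10:] | .[] | "\(.value)\t\(.key)"' knapsack_report.json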

Obviously, at over 13 minutes, hg_access_spec.rb is way too slow, but that shouldn't be too hard to fix. The sharding strategy gives us a strong incentive to do so: no matter how smart Knapsack is or how many cores I have, a whole run can't be faster than those 13 minutes.
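
To spell out the arithmetic: the serial run takes about 28 minutes (~1680 s), so four perfectly balanced shards would ideally finish in about 7 minutes; but the shard that gets hg_access_spec.rb cannot finish before 791 s ≈ 13.2 minutes, which is consistent with the 15 minutes observed with 4 shards.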

A bit of perspective

The RSpec tests we currently use in Heptapod are but a small fraction of the GitLab "unit" RSpec tests. There are also "integration" and "system" tests, as well as functional/end-to-end tests ("qa"). The "unit" RSpec tests alone are run in GitLab CI with no fewer than 20 shards, each job taking between 20 and 25 minutes to complete.

The way Knapsack report files are updated in nightly builds works as follows:

  • each job outputs its own JSON report and uploads it as an artifact. It looks like these are just the regular jobs, with an environment variable set to trigger the reporting.
  • a dedicated update-test-metadata job aggregates all the results and pushes the merged report to S3 (see the merge sketch below).
  • I don't know how all this takes the appearance of new test files into account.
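
As for the merge itself, a minimal sketch of what it could look like on our side, assuming the per-shard artifacts are collected as report-*.json (the file names are hypothetical, and upstream's update-test-metadata job certainly does more than this):

  # merge the per-shard timing reports into a single knapsack_report.json;
  # with -s the input objects are slurped into an array and `add` merges them,
  # later files winning on duplicate keys
  jq -s 'add' report-*.json > knapsack_report.json
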
Edited Dec 05, 2020 by Georges Racinet