CI: registry.heptapod.net / S3 cache for OSUOSL runners

Closed · Created Nov 30, 2020 by Georges Racinet (@gracinet), Owner · 6 of 6 tasks completed

Once a runner has pulled an image, it keeps it locally, as any repeated docker pull would, but that's not enough.

Our OSUOSL x86_64 runners all share the same network link. That makes 8 machines downloading images from registry.heptapod.net, to which the bandwidth isn't great. In particular, bandwidth to our registry is several times lower than to docker.io, which makes total sense geographically.

This is especially problematic for the Heptapod image, because it is really heavy (1GB, compressed) and is produced automatically by CI (hence potentially more frequently). Over the same time frame, we went from 4 OSUOSL runners to 8.

Caching alternative registries is notoriously problematic if one wants to cache all interactions with the registry. But since the bulk of the traffic is served directly from the S3 backend, we should be able to cache only that.

The plan would be:

  • designate at least two of the runner hosts to also act as caching servers for their peers. We obviously don't want a single point of failure (SPOF).
  • create a private Certificate Authority (CA) (a CA/certificate sketch follows this list)
  • have all runners trust the private CA
  • create a certificate for the S3 service, signed by the private CA
  • set up local DNS servers to redirect S3 traffic to the two caching machines (simple round-robin DNS works well) and forward all other zones to the datacenter-wide resolvers. I'm usually happy using unbound for that kind of purpose (a possible configuration is sketched below).
  • configure the two caching servers as bi-layered, highly available (HA) NGINX caching reverse proxies (an NGINX sketch follows below).
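
For the CA and certificate steps, a minimal sketch with openssl could look like the following. All file names are placeholders, `s3.example.internal` stands in for the actual S3 hostname that the registry hands out in its redirects, and the trust-store commands assume a Debian/Ubuntu layout on the runner hosts.

```
# Create the private CA (key + self-signed root certificate)
openssl genrsa -out heptapod-cache-ca.key 4096
openssl req -x509 -new -key heptapod-cache-ca.key -sha256 -days 1825 \
    -subj "/CN=Heptapod runners cache CA" -out heptapod-cache-ca.crt

# Key and CSR for the S3 hostname the caching proxies will answer for
# (s3.example.internal is a placeholder for the real S3 endpoint name)
openssl genrsa -out s3-cache.key 2048
openssl req -new -key s3-cache.key -subj "/CN=s3.example.internal" -out s3-cache.csr

# Sign the CSR with the private CA, including the hostname as a SAN
openssl x509 -req -in s3-cache.csr -CA heptapod-cache-ca.crt -CAkey heptapod-cache-ca.key \
    -CAcreateserial -days 825 -sha256 \
    -extfile <(printf "subjectAltName=DNS:s3.example.internal") \
    -out s3-cache.crt

# On each runner host: trust the private CA (Debian/Ubuntu layout assumed)
sudo cp heptapod-cache-ca.crt /usr/local/share/ca-certificates/heptapod-cache-ca.crt
sudo update-ca-certificates
```

Since it's the Docker daemon that actually fetches the blobs, it may need a restart after the system trust store is updated.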
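For the DNS step, an unbound configuration on each runner host might look roughly like this. `s3.example.internal`, the 10.0.0.1x caching servers and the 10.0.0.2/10.0.0.3 datacenter resolvers are all placeholder names and addresses.

```
# /etc/unbound/unbound.conf.d/s3-cache.conf (sketch, placeholder names/addresses)
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow

    # Answer the S3 hostname locally, round-robinning over the two caching servers
    local-zone: "s3.example.internal." redirect
    local-data: "s3.example.internal. 300 IN A 10.0.0.11"
    local-data: "s3.example.internal. 300 IN A 10.0.0.12"
    rrset-roundrobin: yes

# Everything else goes to the datacenter-wide resolvers
forward-zone:
    name: "."
    forward-addr: 10.0.0.2
    forward-addr: 10.0.0.3
```

The runner's /etc/resolv.conf would then simply point at 127.0.0.1.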
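And on the caching servers themselves, an NGINX sketch along these lines (single caching layer only; the bi-layered HA arrangement would add a second tier on top; hostnames, addresses, paths and cache sizes are placeholders):

```
# /etc/nginx/conf.d/s3-cache.conf (sketch, placeholder names and paths)
proxy_cache_path /var/cache/nginx/s3 levels=1:2 keys_zone=s3cache:100m
                 max_size=50g inactive=7d use_temp_path=off;

server {
    listen 443 ssl;
    server_name s3.example.internal;

    # Certificate signed by the private CA described above
    ssl_certificate     /etc/nginx/tls/s3-cache.crt;
    ssl_certificate_key /etc/nginx/tls/s3-cache.key;

    location / {
        # Resolve the S3 hostname through the datacenter resolvers, not the
        # local unbound (which points that name back at the caching servers).
        # Using a variable forces nginx to honor this resolver directive.
        resolver 10.0.0.2 10.0.0.3;
        set $s3_upstream https://s3.example.internal;
        proxy_pass $s3_upstream;
        proxy_ssl_server_name on;

        proxy_cache s3cache;
        proxy_cache_valid 200 7d;
        proxy_cache_use_stale error timeout updating;
        # If the registry hands out presigned URLs, the changing query string
        # would defeat the default cache key; keying on the path alone trades
        # strict per-request auth for hit rate inside the private network.
        proxy_cache_key $scheme$proxy_host$uri;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```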
Edited Dec 04, 2020 by Georges Racinet