heptapod / heptapod · Issues · #120
Closed
Created Nov 14, 2019 by Georges Racinet (@gracinet), Owner

Fork and adapt GitLab Workhorse for Mercurial

GitLab Workhorse is a reverse proxy that sits between Nginx and the Rails application.

One of its purposes is to deflect potentially long Git HTTP requests (pull, push) directly to Git without going through the Rails application. Besides the obvious performance and robustness benefits, this keeps the load on Unicorn (the Rails server) to a minimum, while Workhorse itself is designed to handle long timeouts gracefully. I suppose it is also able to stream requests and responses completely, something I doubt the Rails app would do easily.

As far as I know, it works by first calling an internal API on the Rails application for authentication and authorization, then calling the inner Git serving process depending on the result.
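
For the Mercurial case, a minimal sketch of that pattern in Go (Workhorse's language) could look like the following. The endpoint URL, backend address and handler names are hypothetical placeholders, not the real Workhorse internal API:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

// Hypothetical addresses: an internal Rails pre-auth endpoint and the
// gunicorn/hgweb backend serving the repositories.
var (
	railsAuthURL = "http://localhost:8080/api/internal/hg/allowed"
	hgwebBackend = &url.URL{Scheme: "http", Host: "localhost:8000"}
)

// hgHandler sketches the Workhorse pattern: ask the Rails application whether
// the request is allowed, then proxy the request straight to the Mercurial
// backend, streaming request and response bodies instead of buffering them.
func hgHandler(w http.ResponseWriter, r *http.Request) {
	// 1. Pre-authorization: a short internal call to the Rails app.
	resp, err := http.Post(railsAuthURL, "application/json", nil)
	if err != nil {
		http.Error(w, "auth backend unavailable", http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		http.Error(w, "access denied", http.StatusForbidden)
		return
	}

	// 2. Hand the potentially long pull/push over to hgweb directly.
	httputil.NewSingleHostReverseProxy(hgwebBackend).ServeHTTP(w, r)
}

func main() {
	http.HandleFunc("/", hgHandler)
	http.ListenAndServe(":8181", nil)
}
```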

We would need the same for Mercurial. Going through the Rails app has actually already been a problem in the past, because of weird handling of Content-Encoding: chunked when we switched from hg serve to gunicorn. Obviously, it would also be much faster and lighter on RAM than what we're currently doing (reading the whole response from Mercurial, which I think actually de-chunks it, IIRC).
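
To make the RAM point concrete, here is a rough Go illustration of the two relaying strategies, assuming a *http.Response already obtained from the Mercurial backend (function names are made up for the sketch). The buffered variant is roughly what the current Rails path ends up doing; the streaming one is what a Workhorse-style proxy would give us:

```go
package hgproxy

import (
	"io"
	"net/http"
)

// relayBuffered mimics the current behaviour: the whole Mercurial response is
// read into memory before being written back out, so memory use grows with
// bundle size and the chunked framing is lost.
func relayBuffered(w http.ResponseWriter, backend *http.Response) error {
	body, err := io.ReadAll(backend.Body)
	if err != nil {
		return err
	}
	_, err = w.Write(body)
	return err
}

// relayStreaming is what a Workhorse-style proxy does instead: copy bytes
// through as they arrive, with constant memory use regardless of bundle size.
func relayStreaming(w http.ResponseWriter, backend *http.Response) error {
	_, err := io.Copy(w, backend.Body)
	return err
}
```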

That probably means forking Workhorse. Perhaps we'd have to wait until we get back to the standard Omnibus build/deploy system. Or maybe we'll want it so much by then that we'll be happy to build from heptapod-docker. We'll see.
