1. 17 Jan, 2023 1 commit
    • Merged upstream v13.11.0 in Heptapod Runner · 0d2af47cdaac
      Georges Racinet authored
      The x.y.0 tags seem to always be on the main branch, meaning
      that we can merge them directly.
      
      Conflict resolution:
      
      - `docker_command_test.go` was entirely renamed upstream as
        `docker_command_integration_test.go`
      - `Makefile` and `ci/version` have upstream changes relevant to
        Git only.
      - minor wording in `helpers/container/helperimage/info.go`
      
      Adaptations:
      
      - `executors/docker/internal/volumes/manager_integration_test.go`:
        introduced in 13.11, using commit ids and refs in the testing
        repo that are now outdated. Had to change them to the current ones
        (compare, e.g., v15.7.0).
  2. 14 Jan, 2023 10 commits
  3. 13 Jan, 2023 11 commits
    • typos in doc and log · 0a1b1117f7fd
      Georges Racinet authored
    • Heptapod PAAS Runner configuration: re-fixed inconsistency · d1673bf0c4ca
      Georges Racinet authored
      This was intended to be the content of 05a640bd76ae, but I
      obviously messed up and it ended up missing.
    • Next version will be 0.6.0rc7 · aaa7a0d205df
      Georges Racinet authored
    • 16df87d69eac
    • Heptapod PAAS Runner: updated version for release · dda35d92e354
      Georges Racinet authored
      Not sure a corresponding Heptapod Runner rc6 will be made;
      we'll see.
    • Clever Runner: wait for inner CC Git repo to be ready · 58acf87737f4
      Georges Racinet authored
      Closes #39
      
      Up to now, we didn't have to do it because the Git repo creation
      is actually so fast that we assumed it was synchronous. This
      was exposed during a scheduled maintenance on the Clever Git
      deployment repos.
      
      The timeout and wait steps can be tweaked, but it shouldn't be necessary.
      Also, this is a non-interruptible wait (we would have to introduce
      serializable state to make it interruptible), but the timeout
      should be low enough that the launching thread would not stay blocked
      long enough to get killed in case of a general shutdown.
      This should be good enough for now; we can make it interruptible
      later if needed.
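
      A minimal Python sketch of what such a bounded wait could look like
      (the `git_repo_ready` probe, the other names and the default values are
      illustrative assumptions, not the actual implementation):

      ```python
      import time


      class GitRepoNotReady(RuntimeError):
          """Raised when the deployment Git repository never became available."""


      def wait_git_repo(resource, timeout=30, interval=2):
          """Block until the PAAS resource's deployment Git repository is ready.

          Deliberately not interruptible: the timeout stays low enough that the
          launching thread cannot be blocked for long during a general shutdown.
          """
          start = time.monotonic()
          while time.monotonic() - start < timeout:
              if resource.git_repo_ready():  # hypothetical readiness probe
                  return
              time.sleep(interval)
          raise GitRepoNotReady(
              "Git repository for %r not ready after %d seconds" % (resource, timeout))
      ```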
    • Heptapod PAAS Runner configuration: fixed inconsistency · 05a640bd76ae
      Georges Racinet authored
      The job trace watch option is very specific, and in the
      `[[runners]]` sections it has to be prefixed (it was already
      correct in the list of items not to forward to the inner executor).
    • Heptapod PAAS Runner: update documentation · e982ba16efce
      Georges Racinet authored
      - the comeback of `concurrent`, meaning the number of jobs
      - new options related to watching jobs until they are fully launched.
    • Heptapod PAAS Runner: wait for jobs activity and throttle provisioning · bbbcc9752f3f
      Georges Racinet authored
      Until this change, we had to assume that a job that had been successfully
      accepted by the PAAS system was out of our hands, at least until it no
      longer appeared in the coordinator job-by-token endpoint (indicating it
      to be finished).
      
      In reality, this is far from enough: there are lots of failure
      possibilities that don't report back to the coordinator.
      
      Now, we watch the user-level job log (internal GitLab terminology
      is the job trace) until at least something gets written by the
      provisioned system (the PAAS resource).
      
      This allows us to limit the number of concurrent jobs being
      provisioned, and also has the added benefit of detecting errors
      earlier than the GitLab job timeout (which defaults to one hour):
      a default timeout of 5 minutes for the first trace data seems
      reasonable, and we provide explicit user feedback about that.
      
      Implementation details:
      
      This means that jobs stay in the pending collection for a much
      longer time, hence the wait has to be interruptible, and the
      corresponding state (resource provisioned, not yet acknowledged
      by coordinator) has to be serialized for the wait to resume on
      restart. This latter point is handled by the `launched` attribute
      on the PAAS Resource (no abstract class for that, only the
      Clever Cloud implementation for now). The same requirement holds
      for the provisioning, and was already fulfilled by the presence
      of the PAAS Resource on the job handle, which was also already
      serialized.
      
      The semantics of the word 'launched' now differ between the
      dispatcher and the PAAS resource. For the former, it means that
      the job has reported back to the coordinator (usually with the line giving
      the GitLab Runner version). This happens well before its actual work
      (typically the user's `script`), but from then on everything is logged
      and it works exactly like a regular GitLab Runner.
      On the other hand, from the point of view of the PAAS Resource,
      'launched' means that the launch command has been sent to the PAAS
      infrastructure. We are running out of synonyms.
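
      A rough illustration of this watch, as a sketch only: the `launched`
      attribute follows the wording above, while `shutdown_required`,
      `job_trace` and the dispatcher/job handle attributes are assumptions:

      ```python
      import time


      def watch_job_trace(dispatcher, job_handle, timeout=300, interval=10):
          """Poll the user-level job log until the PAAS resource writes something.

          Returns True once trace data appears, False on timeout (so the job
          can be failed with explicit feedback well before the GitLab job
          timeout), and None if interrupted by shutdown, in which case the
          serialized `launched` state lets the watch resume on restart.
          """
          resource = job_handle.paas_resource
          start = time.monotonic()
          while not dispatcher.shutdown_required:
              if dispatcher.runner.job_trace(job_handle):  # hypothetical trace fetch
                  resource.launched = True
                  return True
              if time.monotonic() - start > timeout:
                  return False
              time.sleep(interval)
          return None
      ```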
      
      Closes #42
    • Heptapod PAAS Runner: fixed a race condition in startup on loaded state · 919b9e02bfdb
      Georges Racinet authored
      Because launching is currently fast (just a message in a bottle), the
      pending jobs collection is often empty, but that is about to change.
    • PaasRunner: new job_wait_trace method · 8ab1caa90fc3
      Georges Racinet authored
      This method will allow us to watch jobs after launch and assess
      whether they actually start running or not.
      
      The drawback is that it will need an admin token, which is not
      blocking because we already need one for multi-tenant operations.
      A minor update in Heptapod should allow us to avoid this, but it
      opens the question of distinguishing between Heptapod and
      upstream GitLab, for better operation on the former while still
      working with the latter.
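
      A hedged sketch of what the method could amount to, assuming the
      standard GitLab jobs API; `coordinator_url`, `admin_token`,
      `project_id` and `job_id` are illustrative attribute names:

      ```python
      import requests


      def job_wait_trace(self, job_handle):
          """Return the current trace (user-level job log) of a launched job."""
          # Standard GitLab endpoint for job traces; the admin token is the one
          # already needed for multi-tenant operations.
          url = "%s/api/v4/projects/%d/jobs/%d/trace" % (
              self.coordinator_url, job_handle.project_id, job_handle.job_id)
          resp = requests.get(url, headers={"PRIVATE-TOKEN": self.admin_token})
          resp.raise_for_status()
          return resp.text  # may still be empty if nothing was written yet
      ```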
  4. 11 Jan, 2023 3 commits
  5. 12 Jan, 2023 6 commits
    • Heptapod PAAS Runner: potential_weight init from state file was missing · bcb77f5f4652
      Georges Racinet authored
      That's of course a bug.
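
      The fix presumably amounts to also rebuilding the weight counter when
      reloading state, along these lines (all names here are illustrative):

      ```python
      def load_state(self, state_path):
          """Reload pending job handles from the state file after a restart."""
          for job_handle in self.read_saved_job_handles(state_path):  # hypothetical helper
              self.pending_jobs[job_handle.token] = job_handle
              # The missing part: without it, the weighted quota counter
              # started from zero after each restart.
              self.potential_weight += job_handle.weight
      ```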
    • PaasDispatcher: made poll intervals part of configuration · 7470b9f102a2
      Georges Racinet authored
      This was somewhat a remnant of early times. The CLI options are
      still in place, explicitly deprecated even though they
      take precedence.
      
      The defaults stay as before this change.
      
      For the coordinators poll loop, we're using the upstream
      configuration item `check_interval`.
      
      The other interval does not need to be prefixed with `paas_`
      because global configuration is not forwarded to inner Heptapod
      Runner processes.
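
      A minimal sketch of the precedence described here; `check_interval` is
      the upstream item mentioned above, while the other names and the default
      values are placeholders:

      ```python
      def poll_intervals(config, cli_options):
          """Resolve the poll intervals; deprecated CLI options still win."""
          # Coordinators poll loop: reuse the upstream `check_interval` item.
          check_interval = config.get('check_interval', 3)
          # Global item, hence no `paas_` prefix (the global configuration is
          # not forwarded to inner Heptapod Runner processes anyway).
          progress_interval = config.get('job_progress_interval', 30)
          if cli_options.poll_interval is not None:  # deprecated, takes precedence
              check_interval = cli_options.poll_interval
          if cli_options.job_progress_interval is not None:
              progress_interval = cli_options.job_progress_interval
          return check_interval, progress_interval
      ```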
    • Heptapod PAAS Runner: longer sleep between poll loops if limits reached · beeb3302d094
      Georges Racinet authored
      If the global job limits are reached, the appropriate time to
      wait before starting a new poll cycle is the job progress interval
      (typically quite a bit larger), since we need to wait for some jobs to
      finish anyway.
      
      Note that each poll cycle will itself avoid polling coordinators if
      the concurrency limit is reached.
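
      Schematically, the outer loop now looks like this (method names such as
      `poll_cycle` and `limits_reached` are assumptions):

      ```python
      import time


      def poll_loop(dispatcher, check_interval, job_progress_interval):
          """Outer poll loop: back off to the larger interval when limits are hit."""
          while not dispatcher.shutdown_required:
              dispatcher.poll_cycle()  # itself skips coordinators if limits are reached
              if dispatcher.limits_reached():
                  # Nothing new can start before some jobs finish, so waiting less
                  # than the job progress interval would be pointless.
                  time.sleep(job_progress_interval)
              else:
                  time.sleep(check_interval)
      ```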
    • Heptapod PAAS Runner: re-introducing concurrency capping · c1cfa8f7aae8
      Georges Racinet authored
      As testing in the wild showed us, it is important in order to
      protect the coordinator side. It could also be useful in theory
      to protect the cloud infrastructure, if the weighted quota is not
      enough. Part of #32, more details over there.
      
      We're thus giving the `concurrent` configuration item its old meaning,
      with a default value of 50 because 1 is ridiculous for cloud operation.
      
      This makes the code even less thread-safe, since we now have two
      counters to update (number of jobs and their total weight).
      The reason why it shouldn't need locking to
      avoid overflowing the limits is that the polling thread has a monopoly
      on checking the limits, acquiring new jobs and updating the counters.
      Hence it could be wrong about reaching the limit, but only by not taking
      decreases into account.
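
      A rough sketch of the two-counter check performed by the polling thread
      (counter and limit attribute names are illustrative):

      ```python
      def can_acquire(self, job_weight):
          """Tell whether the polling thread may request one more job.

          Only the polling thread checks limits, acquires jobs and increments
          these counters, so no locking is needed to stay under the limits;
          other threads only decrement them, which can make this check
          pessimistic (seeing a limit as reached when it no longer is),
          but never unsafe.
          """
          return (self.jobs_count < self.concurrent
                  and self.potential_weight + job_weight <= self.weighted_quota)


      def job_acquired(self, job_weight):
          self.jobs_count += 1
          self.potential_weight += job_weight
      ```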
    • PAASDispatcher: removed confusing comments · b1d9c8c21dbb
      Georges Racinet authored
      These dated back to the time when event listening
      was performed by `poll_loop()` (or whatever its name was at the
      time), and were probably not understandable to anyone but the
      author.

      The only remaining reason to check the limits in this loop is the
      TODO comment (or some other potential action that we are not thinking
      of now but that would be interesting to perform between cycles).
    • Heptapod PAAS Runner: made `quota_computation` config mandatory · 084812cf1aeb
      Georges Racinet authored
      Because we're about to reinstate the maximum *number* of jobs, for which
      we'll use the `concurrent` configuration item, it won't make sense
      to have the weighted quota default to `concurrent`.
      
      We could still keep it optional, but
      
      - at this level of generality, a default value would not mean anything
      - any concrete application will need it, be it for cost capping on
        infinite clouds such as GCE or AWS or to protect small (future usage
        of OSUOSL OpenStack) or medium (Clever Cloud) infrastructures.
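
      Making the item mandatory presumably boils down to a startup check such
      as this one (the exception class and the config access are assumptions):

      ```python
      class ConfigurationError(ValueError):
          """Hypothetical error type for invalid Heptapod PAAS Runner configuration."""


      def check_quota_computation(config):
          """Refuse to start if the weighted quota computation is not configured."""
          if 'quota_computation' not in config:
              raise ConfigurationError(
                  "The `quota_computation` configuration item is mandatory: "
                  "at this level of generality no default value would mean anything.")
          return config['quota_computation']
      ```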
  6. 10 Jan, 2023 2 commits
  7. 09 Jan, 2023 5 commits
  8. 07 Jan, 2023 2 commits
    • ci/heptapod-docker-images: propagate errors · 1febcf3d0557
      Georges Racinet authored
      This is the first script in which I use the newer `subprocess.run` instead
      of `check_call`. I don't know how I thought it would raise on
      nonzero exit codes by default.

      Rewrapping to set `check=True` keeps the diff minimal, and nothing
      comes to mind that should be allowed to return a nonzero code.
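
      For the record, the whole difference is the `check` keyword argument
      (the command below is just an example):

      ```python
      import subprocess

      # Unlike subprocess.check_call(), subprocess.run() does NOT raise on a
      # nonzero exit code by default:
      subprocess.run(['docker', 'image', 'ls'])

      # With check=True it raises subprocess.CalledProcessError on nonzero
      # exit codes, which is what this script needs to propagate errors:
      subprocess.run(['docker', 'image', 'ls'], check=True)
      ```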
    • ci/heptapod-docker-images: cleanup · b6d45f5a967b
      Georges Racinet authored
      It looks like a bug, it smells like a bug, but there is
      no real bug in it: `tar_variant` is a leftover that is
      ultimately not needed (the dash being part of the non-empty variant
      itself).
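
      Schematically, with made-up values, the point is simply:

      ```python
      variant = '-alpine'   # empty string for the default variant
      # No separate `tar_variant` with its own dash handling is needed,
      # because the dash is already part of the non-empty variant:
      tarball = 'heptapod-runner%s.tar' % variant
      ```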