- 17 Jan, 2023 1 commit
-
-
Georges Racinet authored
The x.y.0 tags seem to always be on the main branch, meaning that we can merge them directly.

Conflict resolution:
- `docker_command_test.go` was entirely renamed upstream as `docker_command_integration_test.go`
- `Makefile` and `ci/version` have upstream changes relevant to Git only
- minor wording in `helpers/container/helperimage/info.go`

Adaptations:
- `executors/docker/internal/volumes/manager_integration_test.go`: introduced in 13.11, using commit ids and refs in the testing repo that are now outdated. Had to change them to the current ones (compare, e.g., v15.7.0).
-
- 14 Jan, 2023 10 commits
-
-
Georges Racinet authored
-
Georges Racinet authored
-
Georges Racinet authored
It might have been removed by external means, or even right before a hard shutdown.
-
Georges Racinet authored
Nice one!
-
Georges Racinet authored
In case a job has an already launched PAAS resource (not to be confused with the job being fully launched; we really need to come up with a change in terminology), before this change the log would have said "Launching job...", but that was just acknowledging the start of `launch_job`, which may itself do nothing more than resume the tracking of the job trace.
-
Georges Racinet authored
We simply weren't loading the `launched` attribute, resulting in failed Git pushes (already mitigated by the parent changeset).
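A hypothetical sketch of the kind of fix this describes: restoring the `launched` flag when deserializing a PAAS resource. The class and field layout below are assumptions for illustration, not the actual implementation.

```python
class CleverCloudResource:
    """Illustrative stand-in for the real PAAS resource class."""

    def __init__(self, app_id):
        self.app_id = app_id
        self.launched = False  # has the launch command been sent already?

    def dump(self):
        return dict(app_id=self.app_id, launched=self.launched)

    @classmethod
    def load(cls, data):
        resource = cls(app_id=data['app_id'])
        # The gist of the fix: without restoring this flag, a restart
        # forgot that the launch had already happened, leading to a
        # second (failing) Git push.
        resource.launched = data.get('launched', False)
        return resource
```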
-
Georges Racinet authored
In theory, Heptapod PAAS Runner state tracking should avoid attempting to deploy an application again. In practice, we currently have a silly bug about that (making it easy to test for real that this change works). The bug will be fixed in the next commit, hopefully leaving only exceptional cases (hard shutdown).
-
Georges Racinet authored
Using the new format already in service in the debug signal. Also, pending jobs are currently especially interesting, since we just implemented #42 and #39, hence logging their details at the `INFO` level.
-
Georges Racinet authored
Now that the launcher threads are long-lived (they wait for full job startup, see #42) and interruptible, waiting for them has become useless: all it did was block shutdown until the general timeout, creating doubt about the consequences for uninterruptible operations (there weren't any, actually).
-
Georges Racinet authored
The contract of `repr()` for `JobHandle` is not really appropriate, as we don't want to log the token. On the other hand, we really need more information about the state, and this is used with `--debug-signal`.
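A minimal sketch of the idea: keep `repr()` terse and token-free, and expose a richer, still token-free, representation for the debug signal output. Everything except the `JobHandle` name is an assumption for illustration.

```python
class JobHandle:
    def __init__(self, job_id, token, state):
        self.job_id = job_id
        self.token = token  # must never appear in logs
        self.state = state

    def __repr__(self):
        # Stable, minimal contract: never include the token.
        return f"JobHandle(job_id={self.job_id})"

    def debug_details(self):
        # Richer state information for --debug-signal output,
        # still deliberately omitting the token.
        return f"JobHandle(job_id={self.job_id}, state={self.state!r})"
```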
-
- 13 Jan, 2023 11 commits
-
-
Georges Racinet authored
-
Georges Racinet authored
This was intended to be the content of 05a640bd76ae but I have obviously messed up and it ended up missing.
-
Georges Racinet authored
-
Georges Racinet authored
-
Georges Racinet authored
Not sure a corresponding Heptapod Runner rc6 will be made; we'll see.
-
Georges Racinet authored
Closes #39. Up to now, we didn't have to do this because Git repo creation is actually so fast that we thought it was synchronous. This was exposed during a scheduled maintenance on Clever Git deployment repos. The timeout and wait steps can be tweaked, but it shouldn't be necessary. Also, this is a non-interruptible wait (we would have to introduce serializable information to make it interruptible), but the timeout should be low enough that the launching thread would not be blocked long enough to get killed in case of a general shutdown. This should be good enough for now; we can make it interruptible later if needed.
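A minimal sketch of such a bounded, non-interruptible wait; the function name, timeout and step values are assumptions for illustration, not the actual code.

```python
import time


class RepoNotReady(RuntimeError):
    """Raised when the deployment repo does not appear within the timeout."""


def wait_for_deploy_repo(check_ready, timeout=60, step=2):
    """Poll `check_ready()` until it returns True or `timeout` seconds elapse.

    Deliberately simple and non-interruptible: the timeout is kept low
    enough that a launching thread blocked here would not be killed
    during a general shutdown.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if check_ready():
            return
        time.sleep(step)
    raise RepoNotReady("deployment repo still absent after %ds" % timeout)
```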
-
Georges Racinet authored
The job trace watch option is very specific, and in the `[[runners]]` sections it has to be prefixed (it was already ok in the list of items not to forward to the inner executor).
-
Georges Racinet authored
- the comeback of `concurrent` meaning number of jobs
- new options related to watching jobs until fully launched.
-
Georges Racinet authored
Until this change, we had to assume that a job successfully accepted by the PAAS system was out of our hands, at least until it no longer appeared in the coordinator job-by-token endpoint (indicating it was finished). In reality, this is far from enough: there are lots of failure possibilities that don't report back to the coordinator.

Now, we watch the user-level job log (internal GitLab terminology: the job trace) until at least something gets written by the provisioned system (the PAAS resource). This allows us to limit the number of concurrent jobs being provisioned, and also has the added benefit of detecting errors earlier than the GitLab job timeout (which defaults to one hour): a default timeout of 5 minutes for the first trace data seems reasonable, and we provide explicit user feedback about it.

Implementation details: jobs now stay in the pending collection for a much longer time, hence the wait has to be interruptible, and the corresponding state (resource provisioned, not yet acknowledged by the coordinator) has to be serialized for the wait to resume on restart. The latter point is handled by the `launched` attribute on the PAAS Resource (no abstract class for that, only the Clever Cloud implementation for now). The same requirement holds for the provisioning itself, and was already fulfilled by the presence of the PAAS Resource on the job handle, which was already serialized.

The semantics of the word 'launched' now differ between the dispatcher and the PAAS resource. For the former, it means that the job has reported back to the coordinator (usually with the line giving the GitLab Runner version). This is quite a bit earlier than its actual work (typically the user's `script`), but from then on everything is logged and it works exactly like a regular GitLab Runner. From the point of view of the PAAS Resource, on the other hand, 'launched' means that the launch command has been sent to the PAAS infrastructure. We are running out of synonyms.

Closes #42
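A rough sketch of such an interruptible wait, assuming a `threading.Event` as the shutdown signal and a callable giving the current trace length; the names, the 5-minute default and the polling step are illustrative, not the actual implementation.

```python
import threading

FIRST_TRACE_TIMEOUT = 5 * 60  # seconds, matching the default evoked above


def wait_for_first_trace_data(read_trace_length, shutdown: threading.Event,
                              timeout=FIRST_TRACE_TIMEOUT, step=10):
    """Wait until the job trace gets its first bytes, the timeout expires,
    or a shutdown is requested.

    Returns True if trace data appeared, False otherwise. Using
    `shutdown.wait(step)` instead of `time.sleep(step)` is what makes
    the wait interruptible.
    """
    waited = 0
    while waited < timeout:
        if read_trace_length() > 0:
            return True
        if shutdown.wait(step):  # True as soon as shutdown is requested
            return False
        waited += step
    return False
```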
-
Georges Racinet authored
Because launching is currently fast (just a message in a bottle), the pending jobs collection is often empty, but that is about to change.
-
Georges Racinet authored
This method will allow us to watch jobs after launch and assess whether they actually start running or not. The drawback is that it needs an admin token, which is not a blocker because we already need one for multi-tenant operations. A minor update in Heptapod should allow us to avoid this, but it opens the subject of distinguishing between Heptapod and upstream GitLab for better operation on the former while still working with the latter.
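For illustration only, such a watch could be built on the GitLab job trace REST endpoint queried with a privileged token; the function below is a hypothetical sketch, not the actual method.

```python
import requests


def fetch_job_trace(gitlab_url, project_id, job_id, admin_token):
    """Hypothetical sketch: read the user-level job log (trace) through
    the GitLab REST API, which requires a sufficiently privileged token."""
    resp = requests.get(
        f"{gitlab_url}/api/v4/projects/{project_id}/jobs/{job_id}/trace",
        headers={"PRIVATE-TOKEN": admin_token},
        timeout=30,
    )
    resp.raise_for_status()
    # Empty until the provisioned system starts writing to the trace.
    return resp.text
```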
-
- 11 Jan, 2023 3 commits
-
-
Georges Racinet authored
Let's not make a violent burst of API calls when doing big cleanups. In the tests, we do not want to impose a delay of 20 seconds (longer than the full run), so we monkey-patch `time.sleep`.
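An illustrative sketch of that testing trick with pytest's `monkeypatch` fixture; the cleanup function and the per-call pause are assumptions, only the idea of replacing `time.sleep` comes from the message above.

```python
import time


def cleanup_resources(resources, delete, pause=20):
    """Hypothetical throttled cleanup loop."""
    for resource in resources:
        delete(resource)
        time.sleep(pause)  # avoid a burst of API calls


def test_cleanup_does_not_really_sleep(monkeypatch):
    naps = []
    # Record the requested pauses instead of actually sleeping.
    monkeypatch.setattr(time, "sleep", naps.append)

    deleted = []
    cleanup_resources(["app1", "app2"], deleted.append)

    assert deleted == ["app1", "app2"]
    assert naps == [20, 20]
```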
-
Georges Racinet authored
-
Georges Racinet authored
There was no reason for it to be a module function, and a method will be more natural for future needs such as, e.g., making it interruptible.
-
- 12 Jan, 2023 6 commits
-
-
Georges Racinet authored
That's of course a bug.
-
Georges Racinet authored
Somewhat of a remnant of early times. The CLI options are still in place, explicitly deprecated even though they take precedence. The defaults stay the same as before this change. For the coordinators poll loop, we're using the upstream configuration item `check_interval`. The other interval does not need to be prefixed with `paas_` because global configuration is not forwarded to inner Heptapod Runner processes.
-
Georges Racinet authored
If the global job limits are reached, the appropriate time to wait before starting a new poll cycle is the job progress interval (typically much longer), since we need to wait for some jobs to finish anyway. Note that each poll cycle will itself avoid polling coordinators if the concurrency limit is reached.
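In sketch form (names are assumptions, not the actual code), the decision amounts to something like:

```python
import time


def sleep_before_next_poll(limits_reached, check_interval,
                           job_progress_interval):
    """Sketch: pick the delay before the next poll cycle."""
    if limits_reached:
        # No new job can be acquired anyway: wait for running jobs
        # to make progress instead of polling coordinators again soon.
        time.sleep(job_progress_interval)
    else:
        time.sleep(check_interval)
```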
-
Georges Racinet authored
As testing in the wild showed us, it is important in order to protect the coordinator side. It could also be useful in theory to protect the cloud infrastructure, if the weighted quota is not enough. Part of #32, more details over there. We're thus giving the `concurrent` configuration item its old meaning, with a default value of 50, because 1 is ridiculous for cloud operation. This makes the code even less thread-safe, since we now have two counters to update (number of jobs and their total weight). The reason it shouldn't need locking to avoid overflowing the limits is that the polling thread has a monopoly on checking the limits, acquiring new jobs and updating the counters. Hence it could be wrong about reaching the limit, but only by not taking decreases into account.
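A stripped-down sketch of the double limit described above; class and attribute names, as well as the quota value, are assumptions for illustration.

```python
class Dispatcher:
    def __init__(self, concurrent=50, weighted_quota=100.0):
        self.concurrent = concurrent          # maximum *number* of jobs
        self.weighted_quota = weighted_quota  # maximum total job weight
        self.jobs_count = 0
        self.jobs_weight = 0.0

    def limits_reached(self):
        # Only the polling thread checks limits, acquires jobs and
        # increments the counters, so no lock is needed to avoid
        # overflow; at worst a concurrent decrease is missed and the
        # check is pessimistic for one cycle.
        return (self.jobs_count >= self.concurrent
                or self.jobs_weight >= self.weighted_quota)

    def job_acquired(self, weight):
        self.jobs_count += 1
        self.jobs_weight += weight

    def job_finished(self, weight):
        self.jobs_count -= 1
        self.jobs_weight -= weight
```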
-
Georges Racinet authored
These dated back to the time when event listening was performed by `poll_loop()` (or whatever its name was at the time), and were probably not understandable to anyone but the author. The only remaining reason to check the limits in this loop is the TODO comment (or other potential actions that we are not thinking of now but would be interesting to perform between cycles).
-
Georges Racinet authored
Because we're about to reinstate the maximum *number* of jobs, for which we'll use the `concurrent` configuration item, it won't make sense to have the weighted quota default to `concurrent`. We could still keep it optional, but:
- at this level of generality, a default value would not mean anything
- any concrete application will need it, be it for cost capping on infinite clouds such as GCE or AWS, or to protect small (future usage of OSUOSL OpenStack) or medium (Clever Cloud) infrastructures.
-
- 10 Jan, 2023 2 commits
-
-
Georges Racinet authored
Also perhaps a bit clearer to people other than me.
-
Georges Racinet authored
We especially don't want the integration tests to run on MRs involving the Python subsystem only.
-
- 09 Jan, 2023 5 commits
-
-
Georges Racinet authored
Apparently, upstream uses dedicated jobs rather than Make targets for these preparations; see, e.g., the `clone test repo` job in `.gitlab/ci/prebuild.gitlab-ci.yml`. For now this simple conditional removal should be good enough for us.
-
Georges Racinet authored
Done as a comment here, for lack of better options known to us.
-
Georges Racinet authored
We don't bother too much about sharing this with the (also renamed) `unit-golang` job, because both will change in the not-so-distant future (upstream v14.0.0 introduces golang test flags for integration tests and runs them in separate jobs).
-
Georges Racinet authored
-
Georges Racinet authored
We expect the failures to be due to the testing environment, but have not found the root cause yet. This scenario is largely shared with other executors (see `common/buildtest/abort.go`) and passes for them. There should be no difference between Heptapod Runner and upstream GitLab Runner in that respect. Skipping should allow us to run the integration tests in CI, which is a net gain.
-
- 07 Jan, 2023 2 commits
-
-
Georges Racinet authored
First script in which I use the newer `subprocess.run` instead of `check_call`. I don't know why I thought it would raise on nonzero exit codes by default. Rewrapping to set `check=True` keeps the diff minimal, and nothing comes to mind that should be allowed to return a nonzero code.
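The distinction in question, as generic Python behavior (not the actual script):

```python
import subprocess

# check_call raises CalledProcessError on a nonzero exit code:
subprocess.check_call(["true"])

# run() does NOT raise by default; it has to be asked to:
subprocess.run(["true"])              # silently returns a CompletedProcess
subprocess.run(["true"], check=True)  # raises CalledProcessError on nonzero
```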
-
Georges Racinet authored
It looks like a bug, it smells like a bug, but there is no real bug in it: `tar_variant` is a leftover that is ultimately not needed (the dash being part of the non-empty variant itself).
-