heptapod-paas-runner: handle job request conflicts
We hit this yesterday with the Clever Cloud runner serving this very instance (foss.heptapod.net):
Nov 25 22:38:10 heptapod-foss heptapod-paas-runner[478946]: INFO:heptapod_runner.paas_dispatcher:No (more) job to process, polling for all runners again in 3 seconds
Nov 25 22:38:13 heptapod-foss heptapod-paas-runner[478946]: INFO:heptapod_runner.paas_dispatcher:No (more) job to process, polling for all runners again in 3 seconds
Nov 25 22:38:16 heptapod-foss heptapod-paas-runner[478946]: INFO:heptapod_runner.paas_dispatcher:No (more) job to process, polling for all runners again in 3 seconds
Nov 25 22:38:19 heptapod-foss heptapod-paas-runner[478946]: INFO:heptapod_runner.paas_dispatcher:Launching job 268977 for runner 'ee4cgV2g'
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: INFO:heptapod_runner.paas_dispatcher:Launching job 268978 for runner 'ee4cgV2g'
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: ERROR:heptapod_runner.paas_dispatcher:Uncatched exception in main thread. Will exit right away with abnormal termination code
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: Traceback (most recent call last):
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: File "/home/heptapod-runner/venv/lib/python3.9/site-packages/heptapod_runner/paas_dispatcher.py", line 617, in main
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: done_cycles = dispatcher.poll_loop(cl_args.poll_interval,
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: File "/home/heptapod-runner/venv/lib/python3.9/site-packages/heptapod_runner/paas_dispatcher.py", line 476, in poll_loop
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: self.poll_all_launch()
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: File "/home/heptapod-runner/venv/lib/python3.9/site-packages/heptapod_runner/paas_dispatcher.py", line 364, in poll_all_launch
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: job_json = runner.request_job()
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: File "/home/heptapod-runner/venv/lib/python3.9/site-packages/heptapod_runner/runner.py", line 141, in request_job
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: raise GitLabUnexpectedError(status_code=resp.status_code,
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: heptapod_runner.exceptions.GitLabUnexpectedError: ('https://foss.heptapod.net/api/v4/jobs/request', 409, None, '{"message":"409 Conflict"}')
Nov 25 22:38:20 heptapod-foss heptapod-paas-runner[478946]: INFO:heptapod_runner.paas_dispatcher:Waiting (at most 60 seconds) for all threads to finish and report back before exit
Nov 25 22:38:22 heptapod-foss heptapod-paas-runner[581192]: remote: [SUCCESS] The application has successfully been queued for redeploy.
Nov 25 22:38:22 heptapod-foss heptapod-paas-runner[581236]: remote: [SUCCESS] The application has successfully been queued for redeploy.
Nov 25 22:38:22 heptapod-foss heptapod-paas-runner[478946]: INFO:heptapod_runner.paas_dispatcher:Successfullly launched job 268977 for runner 'ee4cgV2g'
Nov 25 22:38:22 heptapod-foss heptapod-paas-runner[478946]: INFO:heptapod_runner.paas_dispatcher:Successfullly launched job 268978 for runner 'ee4cgV2g'
Nov 25 22:38:22 heptapod-foss heptapod-paas-runner[478946]: INFO:heptapod_runner.paas_dispatcher:Saving state to '/home/heptapod-runner/state/paas-runner-state.json'
Nov 25 22:38:22 heptapod-foss heptapod-paas-runner[478946]: INFO:heptapod_runner.paas_dispatcher:Saved state to '/home/heptapod-runner/state/paas-runner-state.json'
Nov 25 22:38:22 heptapod-foss systemd[1]: heptapod-paas-runner.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 22:38:22 heptapod-foss systemd[1]: heptapod-paas-runner.service: Failed with result 'exit-code'.
Nov 25 22:38:22 heptapod-foss systemd[1]: heptapod-paas-runner.service: Consumed 5min 9.753s CPU time.
As the error message promised, the uncaught exception really did stop the whole runner process with an abnormal exit code (state persistence did its job on the way out, after the two launches had been confirmed). systemd then restarted the service.
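If I read the GitLab runner API correctly, POST /api/v4/jobs/request normally answers 201 with a job payload or 204 when nothing is pending, and a 409 Conflict signals a transient conflict on the GitLab side (e.g. a job assignment race); the official gitlab-runner seems to simply retry in that case. So the dispatcher should probably treat a 409 like an ordinary empty poll instead of letting GitLabUnexpectedError escape to the main thread. Below is a minimal sketch of what that could look like, not the actual dispatcher code: it reuses the names from the traceback above (request_job, GitLabUnexpectedError) and assumes the exception exposes the status_code it is raised with, as the raise site in runner.py suggests.

    import logging

    from heptapod_runner.exceptions import GitLabUnexpectedError

    logger = logging.getLogger(__name__)


    def request_job_or_none(runner):
        """Wrap runner.request_job(), treating HTTP 409 as "no job for now".

        Sketch only. Assumes GitLabUnexpectedError carries the HTTP status
        in a status_code attribute (it is passed as a keyword argument at
        the raise site shown in the traceback).
        """
        try:
            return runner.request_job()
        except GitLabUnexpectedError as exc:
            if exc.status_code == 409:
                # Assumption: 409 from jobs/request is a transient job
                # assignment conflict, so skip this cycle and poll again.
                logger.info("Job request for runner %r answered with "
                            "409 Conflict, will retry at next polling "
                            "cycle", runner)
                return None  # same meaning as "no job available"
            raise  # anything else is still unexpected

With something like this called from poll_all_launch(), a 409 would just look like an empty poll: the dispatcher logs it, finishes the cycle, and asks again a few seconds later, instead of taking the whole service down and relying on systemd to bring it back.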