1. 12 Jul, 2019 1 commit
  2. 11 Jul, 2019 2 commits
  3. 10 Jul, 2019 7 commits
  4. 09 Jul, 2019 4 commits
  5. 08 Jul, 2019 4 commits
    • Schedule namespace aggregation in other contexts · 023e16924ccc
      Mayra Cabrera authored
      Schedules a Namespace::AggregationSchedule worker when any of the project
      statistics are refreshed.
      
      The worker is only executed if the feature flag is enabled (see the
      sketches after this day's commits).
    • Allow ReactiveCaching to support nil value · a73430d2cec7
      Fabio Pitino authored
      When :calculate_reactive_caching returned a nil value, ReactiveCaching
      scheduled a new worker every time the code using :with_reactive_cache
      was called.
      
      This caused an ever-increasing number of Sidekiq jobs to be created
      continuously.
      
      The fix is implemented behind the feature flag
      :reactive_caching_check_key_exists (see the sketches after this day's
      commits).
    • Fix race condition on merge train ref generation · bb3feaf9bef9
      Shinya Maeda authored
      Today, pipelines for merge trains run on `refs/merge`; however, this
      causes a race condition in which the ref can be overwritten by
      CheckMergeabilityService.
      
      This patch fixes the problem by generating `refs/train` for those
      pipelines (see the sketches after this day's commits).
    • Create CTE query for clusters hierarchy · 10f67e886aca
      Thong Kuah authored
      - This enables us to use a scope to query all clusters in group
      hierarchy order in one query, and also enables us to union in instance
      clusters later (see the sketches after this day's commits).
      
      - Handle the case where no clusters are present at a level, in which
      case the query should skip that level and return the next level's
      clusters.
      
      - Swap in the new CTE query behind a feature flag. The flag is disabled
      by default.
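The scheduling in "Schedule namespace aggregation in other contexts" boils down to a feature-flag-guarded `perform_async` call. Below is a minimal sketch of that pattern using Sidekiq directly; the worker name, flag name, and `feature_enabled?` helper are illustrative assumptions, not GitLab's actual code.

```ruby
require 'sidekiq'

# Hypothetical worker and flag names, for illustration only.
class NamespaceAggregationScheduleWorker
  include Sidekiq::Worker

  def perform(namespace_id)
    # Recalculate the namespace-level aggregate of project statistics here.
  end
end

class ProjectStatisticsRefresh
  # `project` is expected to respond to #namespace_id.
  def self.call(project)
    # ... refresh the individual project statistics ...

    # Only schedule the aggregation when the feature flag is enabled.
    return unless feature_enabled?(:schedule_namespace_aggregation)

    NamespaceAggregationScheduleWorker.perform_async(project.namespace_id)
  end

  # Stand-in for a real feature-flag check (GitLab uses Feature.enabled?).
  def self.feature_enabled?(flag)
    ENV.fetch("FF_#{flag.to_s.upcase}", 'false') == 'true'
  end
end
```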
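For "Allow ReactiveCaching to support nil value", the core of the fix is distinguishing "no value cached yet" from "nil was cached". A minimal sketch of the idea, using a plain hash in place of the Rails cache (the real change sits behind :reactive_caching_check_key_exists):

```ruby
# Minimal illustration of the bug and the fix; not GitLab's implementation.
CACHE = {}

def read_or_schedule_old(key)
  value = CACHE[key]
  # Bug: a legitimately cached nil is indistinguishable from "not cached",
  # so a worker is enqueued on every call.
  return enqueue_worker(key) if value.nil?

  value
end

def read_or_schedule_new(key)
  # Fix: check whether the key exists (Rails.cache.exist?-style) before
  # falling back to scheduling, so a cached nil no longer enqueues workers.
  return CACHE[key] if CACHE.key?(key)

  enqueue_worker(key)
end

def enqueue_worker(key)
  puts "scheduling ReactiveCachingWorker for #{key}"
  nil
end

CACHE[:quota] = nil          # calculate_reactive_caching returned nil
read_or_schedule_old(:quota) # schedules a worker on every call
read_or_schedule_new(:quota) # => nil, no worker scheduled
```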
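For "Fix race condition on merge train ref generation", the essence is that train pipelines now run against a ref that CheckMergeabilityService never rewrites. The ref layout below is an assumption used only to illustrate the separation:

```ruby
# Illustrative ref paths only; the commit's point is the separation, not the naming.
def merge_ref_path(merge_request_iid)
  # Refreshed by CheckMergeabilityService, so it can change underneath a
  # pipeline that is still running against it.
  "refs/merge-requests/#{merge_request_iid}/merge"
end

def train_ref_path(merge_request_iid)
  # Written once for the merge train pipeline and left alone afterwards.
  "refs/merge-requests/#{merge_request_iid}/train"
end
```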
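For "Create CTE query for clusters hierarchy", a recursive CTE along these lines can return all clusters in group-hierarchy order in one query. The table names (namespaces, cluster_groups, clusters) are assumptions and this is not the commit's query verbatim; the inner join is what lets levels with no clusters fall through to the next ancestor, and the depth column provides the ordering.

```ruby
# Sketch only: walk from a group up through its ancestors, then pull the
# clusters attached at each level, closest group first.
CLUSTERS_IN_HIERARCHY_SQL = <<~SQL
  WITH RECURSIVE base_and_ancestors AS (
    -- :group_id is a bind parameter supplied by the caller
    SELECT namespaces.id, namespaces.parent_id, 1 AS depth
    FROM namespaces
    WHERE namespaces.id = :group_id

    UNION ALL

    SELECT namespaces.id, namespaces.parent_id, base_and_ancestors.depth + 1
    FROM namespaces
    INNER JOIN base_and_ancestors ON namespaces.id = base_and_ancestors.parent_id
  )
  SELECT clusters.*
  FROM base_and_ancestors
  INNER JOIN cluster_groups ON cluster_groups.group_id = base_and_ancestors.id
  INNER JOIN clusters ON clusters.id = cluster_groups.cluster_id
  ORDER BY base_and_ancestors.depth ASC
SQL
```

A `UNION` with instance-level clusters can later be appended to the outer select, which is the extension the commit leaves room for.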
  6. 07 Jul, 2019 6 commits
    • BE feedback: memoize deployment_metrics · e8f01b903b6a
      Thong Kuah authored
      Also memoize has_metrics?, since it might be expensive and it should not
      change for the lifetime of EnvironmentStatus (see the sketches after
      this day's commits).
    • Restore fallback to deployment_platform_cluster · d135d2af5ad4
      Thong Kuah authored
      In 12.2 we will remove this fallback.
    • Remove unused method · 15b1daa53db0
      Thong Kuah authored
      We stopped calling the fallback, so we can remove this method now.
    • Extract deployment_metrics into own object · 2e312a9c658c
      Thong Kuah authored
      We can now share the project so that we don't have to load it twice.
      This also extracts non-relevant logic out of Deployment (see the
      sketches after this day's commits).
      
      Updates DeploymentsController accordingly.
    • Share project object in EnvironmentStatus · 31b10a8e3b5d
      Thong Kuah authored
      Otherwise, each EnvironmentStatus object instantiates its own project
      when they are really the same record. This improves query count
      performance.
    • Remove fallback to project.deployment_platform · 4132fd83dca8
      Thong Kuah authored
      This greatly improves the query performance of
      MergeRequestsController#ci_environments_status.
      
      However, this means old deployments that deployed to Kubernetes clusters
      with Prometheus installations will no longer show performance metrics,
      as we cannot backfill cluster_id from deployment_platform with certainty
      (clusters may be added, edited, or deleted, which changes the result of
      deployment_platform).
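The memoization in "BE feedback: memoize deployment_metrics" has one subtlety: has_metrics? can legitimately return false, so the usual `||=` idiom would keep re-running the check. A small sketch, with the helper names being assumptions:

```ruby
class EnvironmentStatus
  def initialize(project, deployment)
    @project = project
    @deployment = deployment
  end

  # ||= is fine here: the memoized value is never nil or false once built.
  def deployment_metrics
    @deployment_metrics ||= { project: @project, deployment: @deployment } # stand-in object
  end

  # ||= would re-run the check every time it returned false, so use an
  # explicit defined? guard instead.
  def has_metrics?
    return @has_metrics if defined?(@has_metrics)

    @has_metrics = expensive_metrics_check
  end

  private

  def expensive_metrics_check
    false # stand-in for the real (potentially slow) lookup
  end
end
```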
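The extraction and project-sharing commits above fit together: the metrics logic moves out of Deployment into its own small object, and the project loaded once by the controller is handed to every status. A sketch under assumed constructor signatures (not GitLab's actual ones), reusing the EnvironmentStatus sketch above:

```ruby
# Metrics logic pulled out of Deployment into a small standalone object.
class DeploymentMetrics
  attr_reader :project, :deployment

  def initialize(project, deployment)
    # The caller passes the project it already loaded, so nothing is re-fetched.
    @project = project
    @deployment = deployment
  end

  def has_metrics?
    false # stand-in for the cluster/project monitoring check
  end
end

# One shared project instance across all statuses for a merge request,
# instead of each status loading its own copy of the same record.
def environment_statuses(project, deployments)
  deployments.map do |deployment|
    EnvironmentStatus.new(project, deployment)
  end
end
```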
  7. 06 Jul, 2019 1 commit
    • Prevent amplification of ReactiveCachingWorker jobs upon failures · c8b7c082e639
      Stan Hu authored
      When `ReactiveCachingWorker` hits an SSL or other exception that occurs
      quickly and reliably, automatically rescheduling a new worker can lead
      to an excessive number of jobs being scheduled. This happens because not
      only does the failed job get rescheduled in a minute, but each Sidekiq
      retry also adds even more rescheduled jobs.
      
      In busy instances, this can become an issue because large numbers of
      `ReactiveCachingWorker` jobs can cause high rates of `ExclusiveLease`
      reads and possibly saturate the Redis server with queries.
      
      We now disable this automatic retry and rely on Sidekiq to perform its 3
      retries with a backoff period (see the sketch below).
      
      Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/64176
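A sketch of the retry change: rather than the worker re-enqueueing itself after every failure (which multiplies jobs once Sidekiq's own retries kick in), the exception propagates and a capped retry count with Sidekiq's built-in backoff takes over. Worker internals are elided; only the retry wiring is shown.

```ruby
require 'sidekiq'

class ReactiveCachingWorker
  include Sidekiq::Worker

  # Let Sidekiq retry a bounded number of times, with its default
  # exponential backoff between attempts.
  sidekiq_options retry: 3

  def perform(class_name, id)
    # ... recompute and store the cached value ...
  rescue StandardError
    # Previously the worker would also enqueue a fresh copy of itself here
    # (roughly self.class.perform_in(1.minute, class_name, id)), so every
    # failed attempt fanned out into more jobs. Re-raising instead leaves
    # retrying entirely to Sidekiq.
    raise
  end
end
```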
  8. 05 Jul, 2019 2 commits
  9. 04 Jul, 2019 6 commits
  10. 03 Jul, 2019 6 commits
  11. 02 Jul, 2019 1 commit