1. 11 Jan, 2019 1 commit
  2. 07 Jan, 2019 5 commits
    • Only set as `read_only` when starting the per-project migration · 10d9a7b13e9f
      Gabriel Mazetto authored
      In the previous code, we locked the project during the migration
      scheduling step, which works fine for small setups, but can be
      problematic in really big installations.
      
      We have now moved the logic inside the worker, so we minimize the time a
      project will be read-only. We also make sure we only do that if the
      reference counter is `0` (no operation is currently in progress).
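
      A minimal sketch of the idea, with class and helper names assumed for
      illustration rather than taken from the actual implementation:

          class HashedStorageMigrationWorker
            def perform(project_id)
              project = Project.find(project_id)

              # A reference counter above zero means a push, fetch or similar
              # operation is still in progress, so skip and retry later.
              return unless project.reference_counter_value.zero? # assumed helper

              project.update!(repository_read_only: true)
              begin
                migrate_storage(project) # assumed migration step
              ensure
                project.update!(repository_read_only: false)
              end
            end
          end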
    • Implement error tracking configuration · 6334ce10dfe7
      Peter Leitzen authored
      Re-use operations controller which already handles tracing settings.
    • Add table and model for error tracking settings · 56c4cbadc13e
      Reuben Pereira authored
    • Create `get_build` for project model · 4f7020aeb1af
      Steve Azzopardi authored
      Inside of `Projects::ArtifactsController` and
      `Projects::BuildArtifactsController` we were fetching the build by ID
      using ActiveRecord directly, which violates the `CodeReuse/ActiveRecord`
      RuboCop rule. Create `get_build` inside of the `Project` model which
      does the same thing.
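
      A minimal sketch of what such a helper could look like (the method name
      comes from the commit title; the body is an assumption):

          # app/models/project.rb
          def get_build(id)
            builds.find_by(id: id)
          end

      Controllers can then call `project.get_build(params[:id])` instead of
      querying ActiveRecord directly.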
    • Refactor project.latest_successful_builds_for def · 50a1dd6a79ff
      Steve Azzopardi authored
      `project.latest_successful_builds_for(ref)` is only ever used to find a
      single job. This results in us having to call `find_by` inside of the
      controller, which violates our `CodeReuse/ActiveRecord` RuboCop rule.
      
      Refactor `project.latest_successful_builds_for(ref)` to
      `project.latest_successful_build_for(job_name, ref)` which will execute
      the `find_by` inside of the model.
      
      Also create `project.latest_successful_build_for!(job_name, ref)`, which
      raises an exception instead of returning `nil`.
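
      A hedged sketch of the two helpers on the `Project` model, assuming the
      existing relation-returning scope is kept internally; the actual query
      may differ:

          def latest_successful_build_for(job_name, ref = default_branch)
            latest_successful_builds_for(ref).find_by(name: job_name)
          end

          def latest_successful_build_for!(job_name, ref = default_branch)
            latest_successful_build_for(job_name, ref) ||
              raise(ActiveRecord::RecordNotFound, "Couldn't find job #{job_name}")
          end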
  3. 04 Jan, 2019 2 commits
  4. 03 Jan, 2019 3 commits
  5. 02 Jan, 2019 3 commits
  6. 21 Dec, 2018 1 commit
  7. 19 Dec, 2018 3 commits
  8. 13 Dec, 2018 1 commit
  9. 12 Dec, 2018 1 commit
  10. 11 Dec, 2018 1 commit
    • Refactor Project#create_or_update_import_data · a96301852960
      Yorick Peterse authored
      In https://gitlab.com/gitlab-org/release/framework/issues/28 we found
      that this method was changed a lot over the years: 43 times if our
      calculations were correct. Looking at the method, it had quite a few
      branches going on:
      
          def create_or_update_import_data(data: nil, credentials: nil)
            return if data.nil? && credentials.nil?
      
            project_import_data = import_data || build_import_data
      
            if data
              project_import_data.data ||= {}
              project_import_data.data = project_import_data.data.merge(data)
            end
      
            if credentials
              project_import_data.credentials ||= {}
              project_import_data.credentials =
                project_import_data.credentials.merge(credentials)
            end
      
            project_import_data
          end
      
      If we turn the || and ||= operators into regular if statements, we can
      see a bit more clearly that this method has quite a lot of branches in
      it:
      
          def create_or_update_import_data(data: nil, credentials: nil)
            if data.nil? && credentials.nil?
              return
            else
              project_import_data =
                if import_data
                  import_data
                else
                  build_import_data
                end
      
              if data
                if project_import_data.data
                  # nothing
                else
                  project_import_data.data = {}
                end
      
                project_import_data.data =
                  project_import_data.data.merge(data)
              end
      
              if credentials
                if project_import_data.credentials
                  # nothing
                else
                  project_import_data.credentials = {}
                end
      
                project_import_data.credentials =
                  project_import_data.credentials.merge(credentials)
              end
      
              project_import_data
            end
          end
      
      The number of if statements and branches here makes it easy to make
      mistakes. To resolve this, we refactor this code in such a way that we
      can get rid of all but the first `if data.nil? && credentials.nil?`
      statement. We can do this by simply sending `to_h` to `nil` in the right
      places, which removes the need for statements such as `if data`.
      
      Since this data gets written to a database, in `ProjectImportData` we do
      make sure not to write empty Hash values. This requires an `unless`
      (which is really an `if !`), but the resulting code is still very easy
      to read.
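
      A sketch of the resulting shape, with `merge_data` and
      `merge_credentials` as assumed helper names on `ProjectImportData`:

          def create_or_update_import_data(data: nil, credentials: nil)
            return if data.nil? && credentials.nil?

            project_import_data = import_data || build_import_data

            # NilClass#to_h returns {}, so no `if data` / `if credentials`
            # branches are needed.
            project_import_data.merge_data(data.to_h)
            project_import_data.merge_credentials(credentials.to_h)

            project_import_data
          end

          # On ProjectImportData, empty Hashes are simply not written:
          def merge_data(hash)
            self.data = data.to_h.merge(hash) unless hash.empty?
          end

          def merge_credentials(hash)
            self.credentials = credentials.to_h.merge(hash) unless hash.empty?
          end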
  11. 08 Dec, 2018 14 commits
  12. 07 Dec, 2018 2 commits
    • Allow public forks to be deduplicated · f837c22a38b1
      Zeger-Jan van de Weg authored
      When a project is forked, the new repository used to be a deep copy of
      everything stored on disk, created with `git clone`. This works well and
      makes isolation between repositories easy. However, at the start the
      clone is 100% identical to the origin repository, and in the case of the
      objects in the object directory this almost always means a lot of
      duplication.
      
      Object Pools are a way to create a third repository that essentially
      only exists for its 'objects' subdirectory. This third repository's
      object directory is set as an alternate location for objects: when an
      object is missing from the local repository, Git will look in this other
      location, the object pool repository.
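
      Conceptually (this is not Gitaly's actual code), linking a member
      repository to a pool boils down to writing the pool's object directory
      into the member's `objects/info/alternates` file:

          require 'fileutils'

          def link_to_pool(member_repo_path, pool_repo_path)
            alternates = File.join(member_repo_path, 'objects', 'info', 'alternates')
            FileUtils.mkdir_p(File.dirname(alternates))
            # Git will also look for missing objects in this directory.
            File.write(alternates, File.join(pool_repo_path, 'objects') + "\n")
          end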
      
      When Git performs garbage collection, it is smart enough to check the
      alternate location. When objects are duplicated, Git is allowed to throw
      one copy away. That copy is removed from the local repository, while the
      pool remains as is.
      
      These pools have an origin location, which for now will always be a
      repository that is itself not a fork. When the root of a fork network is
      forked by a user, the fork still clones the full repository;
      asynchronously, the pool repository will be created.
      
      Either one of these processes can finish earlier than the other. To
      handle this race condition, the Join ObjectPool operation is idempotent:
      because it is idempotent, we can schedule it twice with the same effect.
      
      To accommodate the holding of state, two migrations have been added
      (sketched below):
      1. Added a state column to the pool_repositories table. This column is
      managed by the state machine, allowing for hooks on transitions.
      2. pool_repositories now has a source_project_id. This column is
      convenient to have for multiple reasons: it has a unique index, allowing
      the database to handle race conditions when creating a new record, and
      it's nice to know who the host is, as that's a short link to the fork
      network's root.
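
      Roughly, the two migrations could look like this (migration names and
      column types are assumptions based on the description above):

          class AddStateToPoolRepositories < ActiveRecord::Migration[5.0]
            def change
              add_column :pool_repositories, :state, :string
            end
          end

          class AddSourceProjectIdToPoolRepositories < ActiveRecord::Migration[5.0]
            def change
              add_column :pool_repositories, :source_project_id, :integer
              # The unique index lets the database resolve races when two
              # processes try to create the pool record at the same time.
              add_index :pool_repositories, :source_project_id, unique: true
            end
          end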
      
      Object pools are only available for public projects that use hashed
      storage, and only when forking from the root of the fork network. (That
      is, the project being forked from is not itself a fork.)
      
      In this commit message I use both ObjectPool and PoolRepository, which
      are alike but different from each other. ObjectPool refers to whatever
      is stored on disk and managed by Gitaly; PoolRepository is the record in
      the database.
    • Add endpoint to download single artifact by ref · f5acf8e2b0ff
      Steve Azzopardi authored
      Add a new endpoint
      `projects/:id/jobs/artifacts/:ref_name/raw/*artifact_path?job=name`,
      which is close to the web URL for consistency's sake. This endpoint can
      be used to download a single file from the artifacts of the specified
      ref and job.
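
      A rough Grape-style sketch of such an endpoint; the params handling and
      the `send_artifact_file` helper are assumptions, not the actual
      implementation:

          params do
            requires :job, type: String, desc: 'Name of the job'
          end
          get ':id/jobs/artifacts/:ref_name/raw/*artifact_path',
              requirements: { ref_name: /.+/ } do
            build = user_project.latest_successful_build_for!(params[:job],
                                                              params[:ref_name])

            # Assumed helper that streams one file out of the artifacts archive.
            send_artifact_file(build, params[:artifact_path])
          end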
      
      closes https://gitlab.com/gitlab-org/gitlab-ce/issues/54626
  13. 06 Dec, 2018 2 commits
  14. 05 Dec, 2018 1 commit