1. 09 Mar, 2019 1 commit
  2. 06 Feb, 2019 1 commit
  3. 04 Jan, 2019 1 commit
  4. 03 Jan, 2019 1 commit
    • Fix SIGSEGV caused by double cleanup of gang. · c9f622eb
      Zhenghua Lyu authored
      In the function createGang_thread, when createGang fails because
      some segments are down, it cleans up all gangs. But the code
      forgot to reset CurrentGangCreating to NULL, so the gang would
      be freed a second time. This commit fixes that.
  5. 07 Dec, 2018 1 commit
  6. 04 Dec, 2018 1 commit
  7. 15 Nov, 2018 1 commit
  8. 09 Nov, 2018 4 commits
  9. 26 Oct, 2018 1 commit
  10. 18 Oct, 2018 1 commit
  11. 12 Oct, 2018 2 commits
  12. 27 Sep, 2018 1 commit
  13. 22 Sep, 2018 1 commit
  14. 12 Sep, 2018 3 commits
    • ci: fix race condition in gptransfer ccp multi-cluster tests · 2b591741
      Kris Macoskey authored
      A race condition was occurring because the set of concourse tasks
      that creates the second of two CCP clusters expected to find and
      use the `terraform` volume. The `terraform` volume is supposed to
      be created and used only by the first set of tasks, for the first
      CCP cluster. If the first set of tasks did not complete before
      the second set, the `terraform` volume might not exist yet,
      causing the job to error in concourse.
      
      The fix is to correct the mistake of the second set of tasks
      using the wrong volume: they should only use the `terraform2`
      volume. This completely removes the potential for the race
      condition.
      Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
    • Revert "ci: Temporarily remove ddboost jobs from rc" · 6f447c24
      Chris Hajas authored
      This reverts commit 899e933b.
      
      The network connectivity issue with the Data Domain has been resolved.
    • ci: Temporarily remove ddboost jobs from rc · 899e933b
      Chris Hajas authored
      The DDBoost tests require access to an instance that is currently
      experiencing network connectivity issues. We're removing these jobs from
      blocking the release until the networking issues are resolved.
      Authored-by: Chris Hajas <chajas@pivotal.io>
  15. 02 Sep, 2018 1 commit
  16. 09 Aug, 2018 1 commit
  17. 04 Aug, 2018 1 commit
  18. 02 Aug, 2018 1 commit
  19. 29 Jun, 2018 1 commit
  20. 27 Jun, 2018 1 commit
  21. 14 Jun, 2018 2 commits
  22. 07 Jun, 2018 1 commit
    • Implement CPUSET (#5023) · 0e53f33e
      Jialun authored
      * Implement CPUSET, a new way of managing CPU resources in
      resource groups: it reserves the specified cores exclusively for
      the specified resource group. This ensures that CPU resources
      are always available for a group that has CPUSET set. The most
      common scenario is allocating fixed cores for short queries.
      
      - One can use it by executing CREATE RESOURCE GROUP xxx WITH (
        cpuset='0-1', xxxx), where 0-1 are the CPU cores reserved for
        this group, or ALTER RESOURCE GROUP SET CPUSET '0,1' to modify
        the value (see the sketch after this list).
      - The CPUSET value is a comma-separated list of tuples; each
        tuple represents either a single core number or an interval of
        core numbers, e.g. 0,1,2-3. All cores in CPUSET must be
        available in the system, and the core numbers of different
        groups cannot overlap.
      - CPUSET and CPU_RATE_LIMIT are mutually exclusive: one cannot
        create a resource group with both. However, a group can be
        freely switched between them with ALTER operations, meaning
        that once one feature has been set, the other is disabled.
      - The CPU cores are returned to GPDB when the group is dropped,
        the CPUSET value is changed, or CPU_RATE_LIMIT is set.
      - If some cores have been allocated to a resource group, then
        the CPU_RATE_LIMIT of other groups indicates a percentage of
        only the remaining CPU cores.
      - Even if GPDB is busy and all the cores that have not been
        exclusively reserved through CPUSET are exhausted, the CPU
        cores in a CPUSET will still not be allocated to other groups.
      - The CPU cores in a CPUSET are exclusive only at the GPDB
        level; other non-GPDB processes in the system may still use
        them.
      - Add test cases for this new feature; since the test
        environment must contain at least two CPU cores, we upgrade
        the instance_type configuration in the resource_group jobs.
      
      * - Handle the case where the cgroup directory cpuset/gpdb
          does not exist
      - Implement pg_dump support for cpuset & memory_auditor
      - Fix a typo
      - Change the default cpuset value from an empty string to -1,
        because the code on 5X assumes that every default value in
        resource groups is an integer; a non-integer value would make
        the system fail to start
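      A minimal sketch of the statements described above, using the
      CREATE/ALTER syntax quoted in this commit; the group name
      short_query_grp and the concurrency and rate-limit values are
      hypothetical, chosen only for illustration:

          -- Reserve cores 0 and 1 exclusively for this group
          -- (hypothetical group name and concurrency value).
          CREATE RESOURCE GROUP short_query_grp
              WITH (cpuset='0-1', concurrency=10);

          -- Modify the reserved cores later; '0,1' lists two single cores.
          ALTER RESOURCE GROUP short_query_grp SET CPUSET '0,1';

          -- Setting CPU_RATE_LIMIT disables CPUSET for this group and
          -- returns its reserved cores to GPDB.
          ALTER RESOURCE GROUP short_query_grp SET CPU_RATE_LIMIT 20;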
  23. 06 Jun, 2018 2 commits
  24. 10 May, 2018 1 commit
  25. 09 May, 2018 1 commit
  26. 08 May, 2018 1 commit
  27. 01 May, 2018 1 commit
  28. 19 Apr, 2018 1 commit
    • resgroup: backward compatibility for memory auditor · 23cd8b1e
      Ning Yu authored
      Bring back the resgroup memory auditor feature:
      
      - 4354d336
      - 8ede074c
      - 140d4d2e
      
      Memory auditor was a new feature introduced to allow external
      components (e.g. pl/container) to be managed by resource groups.
      This feature requires a new gpdb dir to be created in the cgroup
      memory controller; however, on the 5X branch, unless users
      created this new dir manually, the upgrade from a previous
      version would fail.
      
      In this commit we provide backward compatibility by checking the release
      version:
      
      - on the 6X and master branches the memory auditor feature is
        always enabled, so the new gpdb dir is mandatory;
      - on the 5X branch the memory auditor feature can be enabled
        only if the new gpdb dir is created with proper permissions;
        when it is disabled, `CREATE RESOURCE GROUP WITH
        (memory_auditor='cgroup')` will fail with guidance on how to
        enable it (see the sketch at the end of this message);
      
      Binary swap tests are also provided to verify backward
      compatibility in future releases. As cgroup needs to be
      configured to enable resgroup, we split the resgroup binary swap
      tests into two parts:

      - resqueue-mode-only tests, which can be triggered in the
        icw_gporca_centos6 pipeline job after the ICW tests; these
        parts have no requirements on cgroup;
      - complete resqueue & resgroup mode tests, which can be
        triggered in the mpp_resource_group_centos{6,7} pipeline jobs
        after the resgroup tests; these parts need cgroup to be
        properly configured;
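      For illustration, a minimal sketch of the statement referenced
      above; the group name plcontainer_grp and the numeric limits are
      hypothetical, and concurrency=0 reflects the convention that
      external-component groups take no direct connection slots:

          -- On 5X this fails with guidance unless the gpdb dir exists
          -- in the cgroup memory controller with proper permissions.
          CREATE RESOURCE GROUP plcontainer_grp WITH (
              concurrency=0,          -- external components take no slots
              cpu_rate_limit=10,      -- hypothetical value
              memory_limit=10,        -- hypothetical value
              memory_auditor='cgroup'
          );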
  29. 11 Apr, 2018 1 commit
  30. 03 Apr, 2018 1 commit
    • CCP 2.0 includes the following changes: · a33b39fa
      Kris Macoskey authored
      1) CCP migration from AWS to GOOGLE.
      
      CCP jobs (except for jobs that need a connection to ddboost and
      netbackup) no longer need external workers, therefore the ccp
      tags for external workers are removed.
      
      The tfstate backends for AWS and GOOGLE are stored separately in
      the s3 bucket, `clusters-aws/` for aws and `clusters-google/` for
      google; set_failed also differs between the two cloud providers.
      
      2) Separate gpinitsystem from the gen_cluster task
      
      When gpinitsystem itself fails in production, it is important
      for a developer to be able to quickly distinguish whether it is
      a CCP failure or a problem with the binaries used to init the
      GPDB cluster. By separating the tasks, it is easier to see when
      gpinitsystem itself has failed.
      
      3) The path to scripts used in CCP has changed
      
      Instead of all of the generic scripts living in `ccp_src/aws/`,
      they are now in a better location, `ccp_src/scripts/`.
      
      4) Parameter names have changed
      
      `platform` is now `PLATFORM` for all references in CCP jobs.
      
      5) NVME jobs
      
      Jobs that used NVME in AWS have been migrated to an identical
      feature for NVME in GCP, though this does include a change to
      the terraform path specified in the job.
      
      6) Instance type mapping from EC2 to GCE
      
      The new parameter name for specifying the instance type in GCP
      jobs is `instance_type`. There is not always a 1:1 match for
      instance types, so there are slight differences in available
      resources for some jobs.
      Signed-off-by: Alexandra Wang <lewang@pivotal.io>
  31. 29 Mar, 2018 1 commit
  32. 21 Mar, 2018 1 commit