- 09 Mar, 2019 1 commit
Committed by Jingyi Mei
-
- 06 Feb, 2019 1 commit
Committed by Huiliang Liu
The AIX server is not available, so we remove the test job from the release candidate. This is a temporary change until the server is available again.
-
- 04 Jan, 2019 1 commit
Committed by Jason Vigil
[#162487473] Co-authored-by: Jason Vigil <jvigil@pivotal.io> Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
-
- 03 Jan, 2019 1 commit
Committed by Zhenghua Lyu
In the function createGang_thread, when createGang fails because some segments are down, it cleans up all gangs. But the code forgot to set CurrentGangCreating to NULL, so the same pointer was freed a second time. This commit fixes the double free.
-
- 07 Dec, 2018 1 commit
Committed by Jason Vigil
[#160797677] Co-authored-by: Jason Vigil <jvigil@pivotal.io> Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
-
- 04 Dec, 2018 1 commit
Committed by Larry Hamel
- Moved that task to the gp4k8s pipeline
- Add ubuntu ent to `gate_compile_end`
- Rename `compile_gpdb_open_source_ubuntu16` to `compile_gpdb_ubuntu16_oss`
Co-authored-by: Larry Hamel <lhamel@pivotal.io> Co-authored-by: Jemish Patel <jpatel@pivotal.io>
-
- 15 Nov, 2018 1 commit
Committed by Jason Vigil
-
- 09 Nov, 2018 4 commits
Committed by Larry Hamel
Co-authored-by: Xin Zhang <xzhang@pivotal.io> Co-authored-by: Larry Hamel <lhamel@pivotal.io> Co-authored-by: David Sharp <dsharp@pivotal.io> Co-authored-by: Fei Yang <fyang@pivotal.io>
-
Committed by Xin Zhang
Co-authored-by: Xin Zhang <xzhang@pivotal.io> Co-authored-by: Fei Yang <fyang@pivotal.io> Co-authored-by: Larry Hamel <lhamel@pivotal.io> Co-authored-by: David Sharp <dsharp@pivotal.io> Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
-
Committed by David Sharp
Authored-by: David Sharp <dsharp@pivotal.io>
-
Committed by David Sharp
Authored-by: David Sharp <dsharp@pivotal.io>
-
- 26 Oct, 2018 1 commit
Committed by David Sharp
The _gcc_6_3 tag had been updated to _gcc_6_4. To avoid missing updates in the future, instead of bumping the version here, the tag has been renamed to simply :16.04. Co-authored-by: David Sharp <dsharp@pivotal.io> Co-authored-by: Fei Yang <fyang@pivotal.io> (cherry picked from commit 1335b0bb)
-
- 18 Oct, 2018 1 commit
Committed by Francisco Guerrero
- Remove the regression_tests_pxf job from the 5X_STABLE pipeline: as part of moving to the gp-integration-testing pipeline, PXF no longer needs to be in 5X_STABLE
- CI: Remove the unused compile_gpdb_pxf task
- Regenerate the pipeline to remove the PXF job
-
- 12 Oct, 2018 2 commits
Committed by Nadeem Ghani
These jobs were running on a gpdb4 external worker; they have been moved to run on a gpdb5 external worker with the others. Co-authored-by: Chris Hajas <chajas@pivotal.io> Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Francisco Guerrero
PXF is a public repository and no longer requires a private key for access, so we remove the pxf-git-key property from the pipeline template. Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io> Co-authored-by: Divya Bhargov <dbhargov@pivotal.io>
-
- 27 Sep, 2018 1 commit
Committed by Lav Jain
Co-authored-by: Lav Jain <ljain@pivotal.io> Co-authored-by: Divya Bhargov <dbhargov@pivotal.io>
-
- 22 Sep, 2018 1 commit
Committed by Karen Huddleston
These jobs are not resource intensive, and the pipeline is not running as often as it used to, so we should be able to run them on every commit. We will keep the nightly triggers on the ddboost and netbackup jobs because we don't want to overrun the servers. Authored-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 12 Sep, 2018 3 commits
Committed by Kris Macoskey
A race condition was occurring because the Concourse tasks that create the second of two CCP clusters expected to find and use the `terraform` volume. That volume is only created and used by the first set of tasks, for the first CCP cluster. If the first set of tasks did not complete before the second set started, the `terraform` volume might not exist yet, causing the job to error in Concourse. The fix corrects the second set of tasks to use only the `terraform2` volume, which removes the race condition entirely. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Chris Hajas
This reverts commit 899e933b. The network connectivity issue with the Data Domain has been resolved.
-
Committed by Chris Hajas
The DDBoost tests require access to an instance that is currently experiencing network connectivity issues. We're removing these jobs from blocking the release until the networking issues are resolved. Authored-by: Chris Hajas <chajas@pivotal.io>
-
- 02 Sep, 2018 1 commit
Committed by Lav Jain
-
- 09 Aug, 2018 1 commit
Committed by Nadeem Ghani
-
- 04 Aug, 2018 1 commit
Committed by Jason Vigil
-
- 02 Aug, 2018 1 commit
Committed by Trevor Yacovone
Co-authored-by: Lisa Oakley <loakley@pivotal.io> Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-
- 29 Jun, 2018 1 commit
Committed by Jamie McAtamney
This is related to the work we have done to fix the sles11 and Windows compilation failures on master. Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: Lisa Oakley <loakley@pivotal.io>
-
- 27 Jun, 2018 1 commit
Committed by Alexander Denissov
Added a new test job to the pipeline to certify GPHDFS with the MAPR Hadoop distribution (MAPR 5.2, Parquet 1.8.1), and renamed the existing GPHDFS certification job to state that it tests with generic Hadoop. The MAPR cluster consists of 1 node deployed by CCP scripts into GCE. Backported from GPDB master. Co-authored-by: Alexander Denissov <adenissov@pivotal.io> Co-authored-by: Shivram Mani <smani@pivotal.io> Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io>
-
- 14 Jun, 2018 2 commits
Committed by Kris Macoskey
The netbackup jobs are paused because of an expired license, so the centos6 resource is not passing for the release candidate. Co-authored-by: Lisa Oakley <loakley@pivotal.io> Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Lisa Oakley
The Netbackup tests depend on an external resource with a valid license, and that license has expired. We're removing the jobs from blocking the release candidate until the license is renewed. Co-authored-by: Lisa Oakley <loakley@pivotal.io> Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
- 07 Jun, 2018 1 commit
Committed by Jialun
* Implement CPUSET, a new kind of CPU resource management in resource groups that reserves the specified cores exclusively for a specified resource group. This ensures that CPU resources are always available for a group that has CPUSET set. The most common scenario is allocating fixed cores for short queries.
  - One can use it by executing CREATE RESOURCE GROUP xxx WITH (cpuset='0-1', xxxx), where 0-1 are the CPU cores reserved for this group, or ALTER RESOURCE GROUP SET CPUSET '0,1' to modify the value.
  - The CPUSET syntax is a comma-separated list of tuples; each tuple is either a single core number or an interval of core numbers, e.g. 0,1,2-3. All cores in a CPUSET must be available in the system, and the core numbers of different groups cannot overlap.
  - CPUSET and CPU_RATE_LIMIT are mutually exclusive: a resource group cannot be created with both. They can, however, be freely switched within one group via ALTER; setting one disables the other.
  - The CPU cores are returned to GPDB when the group is dropped, the CPUSET value is changed, or CPU_RATE_LIMIT is set.
  - If some cores have been allocated to a resource group, CPU_RATE_LIMIT in other groups indicates a percentage of only the remaining CPU cores.
  - Even when GPDB is busy and all cores not reserved through CPUSET are exhausted, the cores in a CPUSET will still not be handed to other groups.
  - The cores in a CPUSET are used exclusively only at the GPDB level; non-GPDB processes in the system may still use them.
  - Added test cases for this feature. The test environment must contain at least two CPU cores, so we upgraded the instance_type configuration of the resource_group jobs.
* Follow-up fixes: compatibility with the case where the cgroup directory cpuset/gpdb does not exist; pg_dump support for cpuset & memory_auditor; a typo fix; and changing the default cpuset value from an empty string to -1, since the code in 5X assumes all resource group defaults are integers and a non-integer value would make the system fail to start.
-
- 06 Jun, 2018 2 commits
Committed by Alexandra Wang
A gate job is added for the Release Candidate to make sure that all release candidate jobs passed for gpdb_src and bin_gpdb on the centos6, centos7, and sles11 platforms. The Release_Candidate job verifies that the commit SHA of gpdb_src and all bin_gpdb resources are the same; if the versions don't match, the job fails. The bin_gpdb_[platform]_rc resources are put in a stable-builds bucket so they can be consumed by the integration and components pipelines. Authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by Alexandra Wang
Authored-by: Alexandra Wang <lewang@pivotal.io>
-
- 10 May, 2018 1 commit
Committed by Karen Huddleston
With ensure, the destroy step will run even if a previous step fails or the job is terminated early. Co-authored-by: Karen Huddleston <khuddleston@pivotal.io> Co-authored-by: Chris Hajas <chajas@pivotal.io>
-
- 09 May, 2018 1 commit
Committed by Karen Huddleston
These jobs no longer belong to the Data Protection team, so it is confusing to have them labeled with DPM. Authored-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 08 May, 2018 1 commit
Committed by Jason Vigil
The other two platforms (centos6 and sles11) are currently being exported. All three need to be consumed in the gp-integration-testing pipeline. Co-authored-by: Jason Vigil <jvigil@pivotal.io> Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
- 01 May, 2018 1 commit
Committed by Kris Macoskey
We test planner and orca on each platform; this one was missing. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
- 19 Apr, 2018 1 commit
Committed by Ning Yu
Bring back the resgroup memory auditor feature: 4354d336, 8ede074c, 140d4d2e. Memory auditor is a new feature that allows external components (e.g. pl/container) to be managed by resource groups. It requires a new gpdb dir to be created in the cgroup memory controller; however, on the 5X branch, unless users created this dir manually, the upgrade from a previous version would fail. In this commit we provide backward compatibility by checking the release version:
- on the 6X and master branches, the memory auditor feature is always enabled, so the new gpdb dir is mandatory;
- on the 5X branch, the memory auditor feature can be enabled only if the new gpdb dir has been created with the proper permissions; when it is disabled, `CREATE RESOURCE GROUP WITH (memory_auditor='cgroup')` fails with guidance on how to enable it.
Binary swap tests are also provided to verify backward compatibility in future releases. As cgroup must be configured to enable resgroup, we split the resgroup binary swap tests into two parts:
- resqueue-mode-only tests, triggered in the icw_gporca_centos6 pipeline job after the ICW tests; these have no cgroup requirements;
- complete resqueue & resgroup mode tests, triggered in the mpp_resource_group_centos{6,7} pipeline jobs after the resgroup tests; these need cgroup to be properly configured.
-
- 11 Apr, 2018 1 commit
Committed by David Sharp
Similar to bin_gpdb_centos6_icw_green, this can be used by downstream builds that need a GPDB build that is passing tests. See also PR #4399. Authored-by: David Sharp <dsharp@pivotal.io>
-
- 03 Apr, 2018 1 commit
Committed by Kris Macoskey
1) CCP migration from AWS to Google. CCP jobs (except jobs that need a connection to ddboost and netbackup) no longer need external workers, so the ccp tags for external workers are removed. The tfstate backends for AWS and Google are stored separately in the s3 bucket, `clusters-aws/` for AWS and `clusters-google/` for Google; set_failed also differs between the two cloud providers.
2) Separate gpinitsystem from the gen_cluster task. When gpinitsystem itself fails in production, it is important for a developer to be able to quickly distinguish a CCP failure from a problem with the binaries used to init the GPDB cluster. With the tasks separated, it is easier to see when gpinit itself has failed.
3) The path to scripts used in CCP has changed. Instead of all of the generic scripts living in `ccp_src/aws/`, they are now in a better location, `ccp_src/scripts/`.
4) Parameter names have changed: platform is now PLATFORM for all references in CCP jobs.
5) NVME jobs. Jobs that used NVME in AWS have been migrated to an identical feature for NVME in GCP, though this includes a change to the terraform path specified in the job.
6) Instance type mapping from EC2 to GCE. The new parameter name for specifying instance type in GCP jobs is `instance_type`. There is not always a 1:1 match between instance types, so there are slight differences in available resources for some jobs.
Signed-off-by: Alexandra Wang <lewang@pivotal.io>
-
- 29 Mar, 2018 1 commit
Committed by David Kimura
Co-authored-by: Xin Zhang <xzhang@pivotal.io> Co-authored-by: David Kimura <dkimura@pivotal.io>
-
- 21 Mar, 2018 1 commit
Committed by David Sharp
(cherry picked from commit 2df5035d)
-